IGGAN
Inverse Graphics GAN: Learning to Generate 3D Shapes from Unstructured 2D Data
Recent work has shown the ability to learn generative models for 3D shapes from only unstructured 2D images. However, training such models requires differentiating through the rasterization step of the rendering process, so past work has focused on developing bespoke rendering models which smooth over this non-differentiable process in various ways. Such models are thus unable to take advantage of the photorealistic, fully featured, industrial renderers built by the gaming and graphics industry. In this paper we introduce the first scalable training technique for 3D generative models from 2D data which utilizes an off-the-shelf non-differentiable renderer. To account for the non-differentiability, we introduce a proxy neural renderer trained to match the output of the non-differentiable renderer. We further propose discriminator output matching to ensure that the neural renderer learns to smooth over the rasterization appropriately. We evaluate our model on images rendered from our generated 3D shapes, and show that it consistently learns to generate better shapes than existing models when trained with exclusively unstructured 2D images.
Generative adversarial networks (GANs) have produced impressive results on 2D image data (Karras et al., 2019; Brock et al., 2019). However, many visual applications, such as gaming, require 3D models as inputs instead of just images, and directly extending existing GAN models to 3D requires access to 3D training data (Wu et al., 2016; Riegler et al., 2017). This data is expensive to generate and so exists in abundance only for very common classes. Ideally we would like to learn to generate 3D models while training with only 2D image data, which is much more widely available and far cheaper and easier to obtain.
Training with only 2D data allows us to use any 3D representation. Our interest is in creating 3D models for gaming applications, which typically rely on 3D meshes, but direct mesh generation is not amenable to generating arbitrary topologies, since most approaches are based on deforming a template mesh. We instead choose to work with voxel representations because they can represent arbitrary topologies, can easily be converted to meshes using the marching cubes algorithm, and can be made differentiable by representing the occupancy of each voxel by a real number which identifies the probability of voxel occupancy.
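Since the voxel grid is the working representation, the binarize-then-mesh step is mechanical. The sketch below illustrates it with a deliberately simplified face extraction in place of marching cubes (a real pipeline would call e.g. `skimage.measure.marching_cubes`); the function and grid here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def voxels_to_quad_mesh(occupancy):
    """Emit one square face per boundary between an occupied and an
    empty cell. A toy stand-in for marching cubes (real pipelines
    would call e.g. skimage.measure.marching_cubes instead)."""
    occ = np.pad(occupancy.astype(bool), 1)  # pad so boundary cells also emit faces
    faces = []
    for x, y, z in zip(*np.nonzero(occ)):
        for d in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            if not occ[x + d[0], y + d[1], z + d[2]]:
                faces.append(((x, y, z), d))  # cell index plus outward normal
    return faces

# A continuous occupancy grid is binarized before meshing: a single
# occupied voxel yields a cube with six exposed faces.
grid = np.zeros((3, 3, 3))
grid[1, 1, 1] = 0.9                      # occupancy probability
faces = voxels_to_quad_mesh(grid > 0.5)  # threshold, then extract faces
print(len(faces))  # 6
```

Shared faces between adjacent occupied voxels are suppressed automatically, which is the same boundary-only property a marching-cubes surface has.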
In order to learn with an end-to-end differentiable model, we need to differentiate through the process of rendering the 3D model to a 2D image, but the rasterization step in rendering is inherently non-differentiable. As a result, past work on 3D generation from 2D images has focused on differentiable renderers which are hand-built from scratch to smooth over this non-differentiable step in various ways. However, standard photorealistic industrial renderers created by the gaming industry (e.g. Unreal Engine, Unity) are not differentiable, so with such methods we cannot use these renderers, and must rely instead on the simple differentiable renderers built by the research community. In particular, two aspects of the rendering process are non-differentiable: (1) the rasterization step inside the renderer is inherently non-differentiable as a result of occlusion, and (2) sampling the continuous voxel grid to generate a mesh is also not differentiable. This second step is required because typical industrial renderers take a mesh as input; we can easily convert a binary voxel grid to a mesh, but continuous voxel inputs do not have a meaningful mesh representation. So rendering a continuous voxel grid using an off-the-shelf renderer requires first sampling a binary voxel grid from the distribution defined by the continuous voxel grid.
In this paper we introduce the first scalable training technique for 3D generative models from 2D data which utilises an off-the-shelf non-differentiable renderer. Examples of the results of our method can be seen in Figure 1. Key to our method is the introduction of a proxy neural renderer, based on the recent successes of neural rendering (Nguyen-Phuoc et al., 2018), which directly renders the continuous voxel grid generated by the 3D generative model. It addresses the two challenges posed by the non-differentiability of the off-the-shelf renderer as follows:
Differentiate through the Neural Renderer: The proxy neural renderer is trained to match the rendering output of the off-the-shelf renderer given a 3D mesh input. This allows backpropagation of the gradient from the GAN discriminator through the neural renderer to the 3D generative model, enabling training using gradient descent.
Discriminator Output Matching:
In order to differentiate through the voxel sampling step, we also train the proxy neural renderer using a novel loss function which we call discriminator output matching. This accounts for the fact that the neural renderer can only be trained to match the off-the-shelf renderer on binary inputs, which leaves it free to generate arbitrary outputs for the (typically) non-binary voxel grids created by the generator. We constrain this by computing the discriminator loss of an image rendered by the neural renderer when passed through the discriminator. This loss is matched to the average loss achieved by randomly thresholding the volume, rendering the now-binary voxels with the off-the-shelf renderer, and passing the resulting image through the discriminator. This sidesteps the instance-level non-differentiability issue and instead targets a differentiable loss defined on the population of generated discrete 3D shapes, forcing the neural renderer to generate images which represent the continuous voxel grids as smooth interpolations between the binary choices from the perspective of the discriminator.
We evaluate our model on a variety of synthetic image data sets generated from 3D models in ShapeNet (Chang et al., 2015), as well as the natural image dataset of chanterelle mushrooms introduced in Henzler et al. (2019), and show that our model generates 3D meshes whose renders achieve better 2D FID scores than both Henzler et al. (2019) and a 2D GAN baseline.
Geometry Based Approaches (or 3D Reconstruction):
Reconstructing the underlying 3D scene from only 2D images has been one of the long-standing goals of computer vision. Classical work in this area has focused on geometry-based approaches in the single-instance setting, where the goal was only to reconstruct a single 3D object or scene depicted in one or more 2D images (Bleyer et al., 2011; De Bonet and Viola, 1999; Broadhurst et al., 2001; Galliani et al., 2015; Kutulakos and Seitz, 2000; Prock and Dyer, 1998; Schönberger et al., 2016; Seitz et al., 2006; Seitz and Dyer, 1999). This early work was not learning based, however, and so was unable to reconstruct any surfaces which do not appear in the image(s).
Learning to Generate from 3D Supervision: Learning-based 3D reconstruction techniques use a training set of samples to learn a distribution over object shapes. Much past work has focused on the simplest learning setting, in which we have access to full 3D supervision. This includes work on generating voxels (Brock et al., 2016; Choy et al., 2016; Riegler et al., 2017; Wu et al., 2016; Xie et al., 2019), point clouds (Achlioptas et al., 2018; Fan et al., 2017; Jiang et al., 2018; Yang et al., 2019; Li et al., 2018), meshes (Groueix et al., 2018; Pan et al., 2019; Wang et al., 2018) and implicit representations (Atzmon et al., 2019; Chen et al., 2019; Genova et al., 2019; Huang et al., 2018; Mescheder et al., 2019; Michalkiewicz et al., 2019; Park et al., 2019; Saito et al., 2019; Xu et al., 2019). Creating 3D training data is much more expensive, however, because it requires either skilled artists or a specialized capture setup. In contrast to all of this work, we focus on learning only from unstructured 2D image data, which is more readily available and cheaper to obtain.
Learning to Generate from 2D Supervision: Past work on learning to generate 3D shapes by training on only 2D images has mostly focused on differentiable renderers. We can categorize this work based on the representation used. Mesh techniques (Kanazawa et al., 2018; Chen et al., 2019; Genova et al., 2018; Henderson and Ferrari, 2019) are based on deforming a single template mesh or a small number of pieces (Henderson and Ferrari, 2019), while Loper and Black (2014) and Palazzi et al. (2018) use only a low-dimensional pose representation, so neither is amenable to generating arbitrary topologies. Concurrent work on implicit models (Niemeyer et al., 2019; Liu et al., 2019) can directly learn an implicit model from 2D images without ever expanding to another representation, but these methods rely on having camera intrinsics for each image, which are usually unavailable for 2D image data. Our work instead focuses on working with unannotated 2D image data.
The closest work to ours uses voxel representations (Gadelha et al., 2017; Henzler et al., 2019). Voxels can represent arbitrary topologies and can easily be converted to a mesh using the marching cubes algorithm. Furthermore, although it is not a focus of this paper, past work has shown that the voxel representation can be scaled to relatively high resolutions through the use of sparse representations (Riegler et al., 2017). Gadelha et al. (2017) employ a visual-hull-based differentiable renderer that only considers a smoothed version of the object silhouette, while Henzler et al. (2019) rely on a very simple emission-absorption-based lighting model. As we show in the results, both of these models struggle to take advantage of the lighting and shading information which reveals surface differences, and so they struggle to correctly represent concavities like bathtubs and sofas.
In contrast to all previous work, our goal is to be able to take advantage of fully-featured industrial renderers, which include many advanced shading, lighting and texturing features. However, these renderers are typically not built to be differentiable, which makes them challenging to work with in machine learning pipelines. The only work we are aware of which uses an off-the-shelf renderer for 3D generation with 2D supervision is Rezende et al. (2016). In order to differentiate through the rendering step, they use REINFORCE gradients (Williams, 1992). However, REINFORCE scales very poorly with the number of input dimensions, allowing them to show results on simple meshes only. In contrast, our method scales much better, since dense gradient information can flow through the proxy neural renderer.
Neural Rendering: With the success of 2D generative models, it has recently become popular to skip the generation of an explicit 3D representation entirely. Neural rendering techniques focus only on simulating 3D by using a neural network to generate 2D images directly from a latent space, with control over the camera angle (Eslami et al., 2018; Nguyen-Phuoc et al., 2019; Sitzmann et al., 2019) and properties of objects in the scene (Liao et al., 2019). In contrast, our goal is to generate the 3D shape itself, not merely controllable 2D renders of it. This is important in circumstances like gaming, where the underlying rendering framework may be fixed, or where we need direct access to the underlying 3D shape itself, such as in CAD/CAM applications. We do, however, build directly on RenderNet (Nguyen-Phuoc et al., 2018), a neural network that can be trained to generate 2D images from 3D shapes by matching the output of an off-the-shelf renderer.
Differentiating Through Discrete Decisions: Recent work has looked at the problem of differentiating through discrete decisions. Maddison et al. (2017) and Jang et al. (2017) consider smoothing over the discrete decision, and Tucker et al. (2017)
extend this with sampling to debias the gradient estimates. In Section 3 we discuss why these methods cannot be applied in our setting. Liao et al. (2018) discuss why we cannot simply differentiate through the marching cubes algorithm, and also suggest using continuous voxel values to generate a probability distribution over 3D shapes. However, in their setting they have ground-truth 3D data, so they directly use these probabilities to compute a loss and do not have to differentiate through the voxel sampling process, as we do when training from only 2D data.
We wish to train a generative model for 3D shapes such that rendering these shapes with an off-the-shelf renderer produces images that match the distribution of the 2D training images. The generative model G_\phi takes in a random input vector z \sim p(z) and generates a continuous voxel representation V = G_\phi(z) of the 3D object. The voxels are then fed to a non-differentiable rendering process: the voxels are first thresholded to discrete values \hat{V} = \mathbb{1}[V \ge \tau] with a random threshold \tau \sim p(\tau), and the discrete-valued voxels are then rendered using the off-the-shelf renderer R (e.g. OpenGL), giving I = R(\hat{V}). In summary, this generating process samples a 2D image as follows:

$$ I = R\big(\mathbb{1}[G_\phi(z) \ge \tau]\big), \qquad z \sim p(z), \; \tau \sim p(\tau). \tag{1} $$
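The sampling process of Eq. (1) can be sketched end to end in a few lines. The generator and renderer below are toy stand-ins (a random smooth field and an occupancy projection, purely illustrative assumptions), but the pipeline structure, latent sample, continuous grid, global threshold, non-differentiable render, is the one the text describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    """Stand-in for the 3D generative model G: maps latent z to a
    continuous occupancy grid in [0, 1] (a real G is a 3D conv net)."""
    # Hypothetical toy mapping: random linear field squashed to [0, 1].
    field = np.tensordot(z, rng.standard_normal((z.size, 8, 8, 8)), axes=1)
    return 1.0 / (1.0 + np.exp(-field))

def off_the_shelf_render(binary_voxels):
    """Stand-in for the non-differentiable renderer R (e.g. OpenGL):
    here just an orthographic occupancy projection along one axis."""
    return binary_voxels.max(axis=0).astype(float)

# z ~ p(z), tau ~ p(tau), then threshold and render as in Eq. (1).
z = rng.standard_normal(16)
voxels = generator(z)                    # continuous grid V = G(z)
tau = rng.uniform(0.0, 1.0)              # global threshold tau ~ p(tau)
binary = (voxels >= tau).astype(float)   # discrete grid 1[V >= tau]
image = off_the_shelf_render(binary)     # 2D image I = R(1[V >= tau])
print(image.shape)
```

The thresholding line is the step that blocks gradient flow: `binary` is piecewise constant in `voxels`, so its derivative is zero almost everywhere.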
Like many GAN algorithms, a discriminator D is then trained on both images sampled from the 2D data distribution p_{\text{data}} and generated images sampled from the model distribution p_\phi defined by (1). We consider maximising e.g. the classification-based GAN cross-entropy objective when training the discriminator:

$$ \max_{D} \; \mathbb{E}_{I \sim p_{\text{data}}}[\log D(I)] + \mathbb{E}_{I \sim p_\phi}[\log(1 - D(I))]. \tag{2} $$
A typical GAN algorithm trains the generator by maximising a loss defined by the discriminator, e.g.

$$ \max_{\phi} \; \mathcal{L}(\phi) = \mathbb{E}_{I \sim p_\phi}[\log D(I)]. \tag{3} $$
For 2D image GAN training, optimisation is usually done by gradient descent, where the gradient is computed via the reparameterisation trick (Salimans et al., 2013; Kingma and Welling, 2014; Rezende et al., 2014). Unfortunately, the reparameterisation trick is not applicable in our setting, since the generation process (1) involves sampling a discrete variable and is thus non-differentiable. An initial idea to address this issue would be to use the REINFORCE gradient estimator (Williams, 1992) for the generative model loss (3):

$$ \nabla_\phi \mathcal{L}(\phi) = \mathbb{E}_{\hat{V} \sim p_\phi(\hat{V})}\big[\, r(\hat{V}) \, \nabla_\phi \log p_\phi(\hat{V}) \,\big], \qquad r(\hat{V}) := \log D(R(\hat{V})), \tag{4} $$

with \nabla_\phi \log p_\phi(\hat{V}) the score function of p_\phi(\hat{V}). Here r(\hat{V}) is often called the "reward" of generating \hat{V}. Intuitively, the gradient ascent update using (4) would encourage the generator to generate \hat{V} with high reward, thus fooling the discriminator. But again, REINFORCE is not directly applicable, as the score function is intractable (the distribution p_\phi(\hat{V}) is implicitly defined by neural network transformations of noise variables). A second attempt at gradient approximation would replace the discrete sampling step with continuous-value approximations (Jang et al., 2017; Maddison et al., 2017), in the hope that this enables the usage of the reparameterisation trick. However, the off-the-shelf renderer is treated as a black box, so we cannot assume that backpropagation is implemented there. Even worse, the discrete variable sampling step is necessary, as off-the-shelf renderers can only work with discrete voxel maps. Therefore the instance-level gradient is not defined on the discrete voxel grid, and gradient approximation methods based on continuous relaxations cannot be applied either.
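To make the score-function estimator concrete, here is a toy case where it does work: a factorised Bernoulli "voxel" distribution with sigmoid-parameterised probabilities, whose score is available in closed form (everything here, the reward, the four-voxel grid, is an illustrative assumption; the paper's generator defines its distribution only implicitly, which is exactly why this estimator is unusable there).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: p_phi(b) = prod_i Bernoulli(b_i; sigmoid(phi_i)) over 4
# "voxels", so the score grad_phi log p_phi(b) = b - p is closed-form.
phi = np.zeros(4)
p = 1.0 / (1.0 + np.exp(-phi))           # = 0.5 per voxel

def reward(b):
    """Plays the role of log D(R(b)); here just the occupied count."""
    return b.sum(axis=1, keepdims=True)

b = (rng.uniform(size=(200_000, 4)) < p).astype(float)  # b ~ p_phi
score = b - p                             # grad_phi log p_phi(b), per sample
estimate = (reward(b) * score).mean(axis=0)

# Analytic check: grad_phi E[sum(b)] = p * (1 - p) = 0.25 per dimension.
print(estimate)
```

Even in this four-dimensional case the estimator needs many samples to converge, which illustrates why REINFORCE scales poorly to full voxel grids.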
To address the non-differentiability issues, we introduce a proxy neural renderer as a pathway for backpropagation in generator training. In detail, we define a neural renderer R_\theta which directly renders the continuous voxel representation V into the 2D image I = R_\theta(V). To encourage realistic renderings that are close to the results from the off-the-shelf renderer, the neural renderer is trained to minimise the error of rendering on discrete voxels:

$$ \mathcal{L}_{\text{render}}(\theta) = \mathbb{E}_{\hat{V}}\big[\, \| R_\theta(\hat{V}) - R(\hat{V}) \|_2^2 \,\big]. \tag{5} $$
If the neural renderer matches closely with the off-the-shelf renderer on rendering discrete voxel grids, then we can replace the non-differentiable renderer R in (4) with the neural renderer R_\theta:

$$ \nabla_\phi \mathcal{L}(\phi) \approx \mathbb{E}_{\hat{V} \sim p_\phi(\hat{V})}\big[\, \log D(R_\theta(\hat{V})) \, \nabla_\phi \log p_\phi(\hat{V}) \,\big]. \tag{6} $$
The intractability of the score function \nabla_\phi \log p_\phi(\hat{V}) remains to be addressed. Notice that the neural renderer can take in both discrete and continuous voxel grids as inputs, therefore the instance-level gradient is well-defined and computable for both \hat{V} and V. This motivates the "reward approximation" approach, which sidesteps the intractability of the score function via the reparameterisation trick:

$$ \nabla_\phi \mathcal{L}(\phi) \approx \nabla_\phi \, \mathbb{E}_{z \sim p(z)}\big[\, \log D(R_\theta(G_\phi(z))) \,\big] = \mathbb{E}_{z \sim p(z)}\big[\, \nabla_\phi \log D(R_\theta(G_\phi(z))) \,\big]. \tag{7} $$
The reward approximation quality is key to the performance of the generative model, as gradient ascent using (7) encourages the generator to create continuous voxel grids for which the neural renderer returns realistic rendering results. Notice that the neural renderer is free to render arbitrary outcomes for continuous voxel grids if it is only trained by minimising (5) on discrete voxel grids. Therefore V is not required to resemble the desired 3D shape in order to produce satisfactory neural rendering results. Since the 3D generative model typically creates non-discrete voxel grids, the generated 2D images using the off-the-shelf renderer will then match poorly to the training images. We address this issue by training the neural renderer with a novel loss function which we call discriminator output matching (DOM). Writing \hat{V}_\tau := \mathbb{1}[G_\phi(z) \ge \tau], the DOM loss is

$$ \mathcal{L}_{\text{DOM}}(\theta) = \mathbb{E}_{z \sim p(z)}\Big[ \big( \log D(R_\theta(G_\phi(z))) - \mathbb{E}_{\tau \sim p(\tau)}[\, \log D(R(\hat{V}_\tau)) \,] \big)^2 \Big]. \tag{8} $$
Using neural networks of enough capacity, the optimal neural renderer achieves perfect reward approximation, i.e. \log D(R_\theta(V)) = \mathbb{E}_{\tau \sim p(\tau)}[\log D(R(\mathbb{1}[V \ge \tau]))] for all generated V. This approach forces the neural renderer to preserve the population statistics, as defined by the discriminator, of the discretely rendered images. Therefore, to fool the discriminator, the 3D generative model must generate continuous voxel grids which correspond to meaningful representations of the underlying 3D shapes. In practice the neural renderer is trained using a combination of the two loss functions:

$$ \mathcal{L}(\theta) = \mathcal{L}_{\text{render}}(\theta) + \lambda \, \mathcal{L}_{\text{DOM}}(\theta). \tag{9} $$
We name the proposed method inverse graphics GAN (IG-GAN), as the neural renderer, at backpropagation time, "inverts" the off-the-shelf renderer and provides useful gradients for training the 3D generative model. The model is visualised in Figure 2 and is trained in an end-to-end fashion. To speed up training, the neural renderer can be pretrained on a generic data set, such as tables or cubes, that can differ significantly from the 2D images that the generative model is eventually trained on.
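The discriminator output matching objective of Eq. (8) is simple to state in code. The sketch below uses stub functions for the discriminator score, the neural renderer and the off-the-shelf renderer (all three are toy assumptions, not the paper's architectures), and estimates the inner expectation over thresholds by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for the trained components (illustrative assumptions only):
def disc_logit(image):             # plays the role of log D(.)
    return float(np.tanh(image.mean()))

def neural_render(voxels):         # differentiable proxy renderer
    return voxels.mean(axis=0)

def opengl_render(binary_voxels):  # non-differentiable renderer
    return binary_voxels.max(axis=0).astype(float)

def dom_loss(voxels, n_thresholds=64):
    """Discriminator output matching, Eq. (8): match the discriminator
    score of the neural rendering to the average score over renders of
    randomly thresholded binarizations of the same continuous grid."""
    proxy_score = disc_logit(neural_render(voxels))
    sampled = [disc_logit(opengl_render((voxels >= rng.uniform()).astype(float)))
               for _ in range(n_thresholds)]
    return (proxy_score - np.mean(sampled)) ** 2

voxels = rng.uniform(size=(8, 8, 8))  # a continuous voxel grid V
print(dom_loss(voxels))
```

Only the first term depends on the neural renderer's parameters; the thresholded renders act as a fixed Monte Carlo target, which is what makes the objective trainable despite the discrete sampling step.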


Table 1: FID scores (lower is better) for 2D-DCGAN, Visual Hull, Absorption Only, and IG-GAN (Ours), on Tubs, Couches and Chairs under the 500, One-per-Model and Unlimited settings, plus the Chairs LVP setting. A fair comparison to 2D-DCGAN is impossible, as its generator is trained on LVP data, but FID evaluations use a 360° view.

Our rendering engine is based on Pyrender (Matl, ), which is built on top of OpenGL.
We employ a 3D convolutional GAN architecture for the generator (Wu et al., 2016). To incorporate the viewpoint, a rigid-body transformation embeds the voxel grid into a higher-resolution grid. We render the volumes with a RenderNet (Nguyen-Phuoc et al., 2018) architecture, modified to use 2 residual blocks in 3D and 4 residual blocks in 2D only. The discriminator architecture follows the design in DCGAN (Radford et al., 2016). Additionally, we add spectral normalization to the discriminator (Miyato et al., 2018) to stabilize training.
We employ a 1:1:1 updating scheme for the generator, discriminator and neural renderer, with a separate learning rate for the neural renderer and a shared one for the generator and discriminator. The Discriminator Output Matching loss is weighted by λ relative to the rendering loss, as in (9). We found that training was stable against changes in λ, and extensive tuning was not necessary. The binarization distribution was chosen as global thresholding, with the threshold distributed uniformly.
We evaluate our model on a variety of synthetic datasets generated from 3D models in the ShapeNet (Chang et al., 2015) database as well as on a real data set consisting of chanterelle mushrooms, introduced in (Henzler et al., 2019).
We synthesize images from three different categories of ShapeNet objects, Chairs, Couches and Bathtubs. For each of these categories, we generate three different datasets:
500: A very small dataset consisting of 500 images taken from 500 different objects in the corresponding data set, each rendered from a single viewpoint.
One per Model: A more extensive dataset where we render each object in the 3D data set once from a single viewpoint for the Chairs and Couches data, and from four viewpoints for Bathtubs, due to the small number of objects in this category. This results in data sets containing 6777 chairs, 3173 couches and 3424 images of bathtubs.
Unlimited: Finally, we render each object from a different viewpoint in each training epoch. This leads to a theoretically unlimited set of images. However, all images are generated from the limited number of objects contained in the corresponding ShapeNet category.
In order to render the ShapeNet volumes, we light them with two beam light sources that illuminate the object from 45° below the camera and 45° to its left and right. The camera always points directly at the object, and we uniformly sample from all 360° of rotation, with an elevation between 15° and 60° for bathtubs and couches, and between 30° and 60° for chairs. The limitation on elevation angle is chosen to generate a dataset that is as realistic as possible, given the object category.
LVP: We also consider a limited viewpoint (LVP) setting for the chairs dataset, where the azimuth rotation is limited to 60° in each direction from center, and elevation is limited to be between 15° and 60°. This setting is meant to further simulate the viewpoint bias observed in natural photographs.
Chanterelle Mushrooms: We prepare the Chanterelle data by cropping and resizing the images and by unifying the mean intensity in an additive way. Note that the background of the natural images was masked out in the original open-sourced dataset.
We compare to the following stateoftheart methods for learning 3D voxel generative models from 2D data:
Visual Hull: This is the model from Gadelha et al. (2017), which learns from a smoothed version of the object silhouette.
Absorption Only: This is the model from Henzler et al. (2019), which assumes voxels absorb light based on their fraction of occupancy.
2DDCGAN: For comparison we also show results from a DCGAN (Radford et al., 2016) trained on 2D images only. While this baseline is solving a different task and is not able to produce any 3D objects, it allows a comparison of the generated image quality.
We show results from our reimplementation of these models, because we found Gadelha et al. (2017) performed better using recently developed GAN stabilization techniques, i.e. spectral normalization (Miyato et al., 2018), and the code for Henzler et al. (2019) was not available at the time we ran our original results. A discussion of the Emission-Absorption model also proposed in Henzler et al. (2019) is provided in the supplemental material (https://lunzs.github.io/iggan/iggan_supplemental.pdf). To provide the most favorable comparison for the baselines, each baseline is trained on a dataset of images synthesized from the 3D objects in ShapeNet using the respective choice of rendering (Visual Hull or Absorption-Only).
We chose to evaluate the quality of the generated 3D models by rendering them to 2D images and computing Fréchet Inception Distances (FIDs) (Heusel et al., 2017). This focuses the evaluation on the observed visual quality, preventing it from considering what the model generates in the unobserved insides of the objects. All FID scores reported in the main paper use an Inception network (Szegedy et al., 2016) retrained to classify images generated with our renderer, because we found this to better align with our sense of visual quality given the domain gap between ImageNet and our setting. We trained the Inception network to classify rendered images of the 21 largest object classes in ShapeNet, achieving 95.3% accuracy. In the supplemental material we show FID scores using the traditional Inception network trained on ImageNet (Deng et al., 2009), and qualitatively they are very similar.
We can see clearly from Table 1 that our approach (IG-GAN) significantly outperforms the baselines on all datasets. The largest of these gains is obtained on the data sets containing many concavities, like couches and bathtubs. Furthermore, the advantage of the proposed method becomes more significant when the dataset is smaller and the viewpoint more restrictive. Since our method can more easily take advantage of the lighting and shading cues provided by the images, we believe it can extract more meaningful information per training sample, hence producing better results in these settings. On the Unlimited dataset the baseline methods seem to be able to mitigate some of their disadvantage by simply seeing enough views of each training model, but our approach still generates considerably better FID scores even in this setting. We note that the very poor results from the 2D-DCGAN stem from computing FIDs using our retrained Inception network, which seems to easily pick up on any unrealistic artifacts generated by the GAN. The supplemental materials show the FID scores using the standard Inception net, and while IG-GAN still outperforms the 2D-DCGAN considerably, the resulting scores are closer.
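For reference, the Fréchet distance underlying the FID metric is a closed-form function of the feature means and covariances of the two image sets. A numpy-only sketch on synthetic feature vectors (not Inception activations) follows; the symmetric-PSD square root via eigendecomposition is an implementation choice, not part of the metric's definition.

```python
import numpy as np

def fid(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two feature sets:
    ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a^{1/2} S_b S_a^{1/2})^{1/2})."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)

    def psd_sqrt(m):
        # Matrix square root of a symmetric PSD matrix via eigendecomposition.
        vals, vecs = np.linalg.eigh(m)
        return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

    root_a = psd_sqrt(cov_a)
    cross = psd_sqrt(root_a @ cov_b @ root_a)
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a) + np.trace(cov_b) - 2 * np.trace(cross))

rng = np.random.default_rng(3)
same = rng.standard_normal((5000, 8))
shifted = same + 1.0  # identical covariance, mean shifted by 1 in 8 dims
print(fid(same, same))     # ~0: identical sets
print(fid(same, shifted))  # ~8: covariance terms cancel, mean term = 8
```

In the actual evaluation the feature vectors come from the retrained Inception network described above, which is why the absolute scores differ from ImageNet-Inception FIDs.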
From Figure 3 we can see that IGGAN produces highquality samples on all three object categories. Furthermore, we can see from Figure 6 that the generated 3D shapes are superior to the baselines. This is particularly evident in the context of concave objects like bathtubs or couches. Here, generative models based on visual hull or absorption rendering fail to take advantage of the shading cues needed to detect the hollow space inside the object, leading to e.g. seemingly filled bathtubs. The shortcoming of the baseline models has already been noticed by Gadelha et al. (2017). Our approach, on the other hand, successfully detects the interior structure of concave objects using the differences in light exposures between surfaces, enabling it to accurately capture concavities and hollow spaces.
On the chair dataset, the advantages of our proposed method are evident on flat surfaces. Any uneven surfaces generated by mistake are promptly detected by our discriminator, which can easily spot differences in light exposure, forcing the generator to produce clean and flat surfaces. The baseline methods, however, are unable to render such impurities in a way that is evident to the discriminator, leading to generated samples with grainy and uneven surfaces. This effect is most obvious on the chairs with limited views (LVP), as shown in Figure 4. A large selection of randomly generated samples from all methods can be found in the supplemental material.





We study the effect of the proposed discriminator output matching (DOM) loss in various scenarios. In Table 5.3, we report FID scores for models trained without the DOM loss; from this comparison we see that the DOM loss plays a crucial role in learning to generate high-quality 3D objects. In Figure 7 we can see that the non-binary volumes sampled from the generator can be rendered to a variety of different images by OpenGL, depending on the random choice of threshold. Without the DOM loss, the trained neural renderer simply averages over these potential outcomes, considerably smoothing the result in the process and losing information about fine structures in the volume. This leads to weak gradients being passed to the generator, considerably deteriorating sample quality.
OpenGL  RenderNet  
Retrained  86.4/180.1  74.7/144.8 
Fixed  113.7/323.5  103.9/124.6 
Another setting from Table 5.3 shows the discriminator trained on generated samples rendered by the neural renderer instead of OpenGL. This inherently prevents the mode collapse observed in the above setting. However, it leads to the generator being forced by the discriminator to produce binary voxel maps early on in training. This seems to cause the generator to get stuck in a local optimum, deteriorating sample quality.

We investigate the effect of various pretrainings of the neural renderer. All other experiments were conducted with the neural renderer pretrained on the Tables data from ShapeNet (see Table 1). As a comparison, we run the proposed algorithm on the chair data using a neural renderer pretrained on either the Chair data itself or a simple data set consisting of randomly sampled cubes. As shown in Table 5.3, the quality of the results produced by our method is robust to changes in the pretraining of the neural renderer. In contrast, if we use a fixed pretrained renderer it produces reasonable results if pretrained directly on the domain of interest, but deteriorates significantly if trained only on a related domain. Note that we assume no access to 3D data in the domain of interest so in practice we cannot pretrain in this way.
Pretraining data  Chairs  Tables  Random
Ours  22.6  20.7  20.4
Fixed  37.8  105.8  141.9
We have presented the first scalable algorithm using an arbitrary off-the-shelf renderer to learn to generate 3D shapes from unstructured 2D data. We have introduced a novel loss term, Discriminator Output Matching, that allows stable training of our model on a variety of datasets, achieving significantly better FID scores than previous work. In particular, by using light exposure and shadow information in our rendering engine, we are able to generate high-quality concave shapes, like bathtubs and couches, that prior work failed to capture.
The presented framework is general and in concept can be made to work with any off-the-shelf renderer. In practice, our work can be extended by using more sophisticated photorealistic rendering engines, to learn even more detailed information about the 3D world from images. By incorporating color, material and lighting prediction into our model, we hope to extend it to work with more general real-world datasets.
References
Generative and discriminative voxel modeling with convolutional neural networks. arXiv preprint arXiv:1608.04236.
Neural scene representation and rendering. Science 360 (6394), pp. 1204–1210.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 605–613.
Towards unsupervised learning of generative models for 3D controllable image synthesis. arXiv preprint arXiv:1912.05237.
The concrete distribution: a continuous relaxation of discrete random variables. In International Conference on Learning Representations.
End-to-end 6-DoF object pose estimation through differentiable rasterization. In Proceedings of the European Conference on Computer Vision (ECCV).
Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning (ICML), pp. II-1278–II-1286.
Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis 8 (4), pp. 837–882.
REBAR: low-variance, unbiased gradient estimates for discrete latent variable models. In Advances in Neural Information Processing Systems, pp. 2627–2636.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning 8 (3–4), pp. 229–256.