SMPLpix: Neural Avatars from 3D Human Models
Recent advances in deep generative models have led to an unprecedented level of realism for synthetically generated images of humans. However, a fundamental limitation of these models remains: the lack of flexible control over the generative process, e.g. the ability to change the camera and human pose while retaining the subject identity. At the same time, deformable human body models like SMPL and its successors provide full control over pose and shape, but rely on classic computer graphics pipelines for rendering. Such rendering pipelines require explicit mesh rasterization that (a) cannot fix artifacts or a lack of realism in the original 3D geometry and (b) until recently, was not fully incorporated into deep learning frameworks. In this work, we propose to bridge the gap between classic geometry-based rendering and the latest generative networks operating in pixel space by introducing a neural rasterizer, a trainable neural network module that directly "renders" a sparse set of 3D mesh vertices as photorealistic images, avoiding any hardwired logic in pixel colouring and occlusion reasoning. We train our model on a large corpus of human 3D models and corresponding real photos, and show its advantage over conventional differentiable renderers both in the level of photorealism and in rendering efficiency.
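To make the pipeline described above concrete, here is a minimal sketch of the neural-rasterization idea: project sparse 3D mesh vertices to the image plane, splat their features into a sparse image with simple depth-based occlusion, and let a trainable network turn that sparse input into a dense RGB image. The projection convention, network shape, and all names below are illustrative assumptions for a PyTorch sketch, not the authors' exact implementation.

import torch
import torch.nn as nn

def splat_vertices(verts_cam, vert_feats, K, H, W):
    """Project camera-space vertices with a pinhole model and splat per-vertex
    features (e.g. RGB) plus depth into a sparse (C+1, H, W) image, keeping
    the nearest vertex per pixel (a simple z-buffer)."""
    z = verts_cam[:, 2].clamp(min=1e-6)
    u = (K[0, 0] * verts_cam[:, 0] / z + K[0, 2]).round().long()
    v = (K[1, 1] * verts_cam[:, 1] / z + K[1, 2]).round().long()

    C = vert_feats.shape[1]
    image = torch.zeros(C + 1, H, W)            # feature channels + depth
    zbuf = torch.full((H, W), float("inf"))

    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for i in inside.nonzero(as_tuple=True)[0]:  # loop for clarity, not speed
        if z[i] < zbuf[v[i], u[i]]:             # nearest vertex wins the pixel
            zbuf[v[i], u[i]] = z[i]
            image[:C, v[i], u[i]] = vert_feats[i]
            image[C, v[i], u[i]] = z[i]
    return image

class NeuralRasterizer(nn.Module):
    """Trainable module mapping the sparse splatted image to a dense RGB
    image; a small stand-in for the image-translation network implied by
    the abstract."""
    def __init__(self, in_ch=4, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, sparse_img):
        return self.net(sparse_img)

# Usage sketch: 6890 SMPL-sized vertices with per-vertex RGB, at 256x256.
verts = torch.rand(6890, 3) + torch.tensor([0.0, 0.0, 2.0])   # in front of camera
feats = torch.rand(6890, 3)                                    # per-vertex RGB
K = torch.tensor([[256.0, 0.0, 128.0], [0.0, 256.0, 128.0], [0.0, 0.0, 1.0]])
sparse = splat_vertices(verts, feats, K, 256, 256).unsqueeze(0)
rgb = NeuralRasterizer()(sparse)  # trained end-to-end against real photos

Note that in this sketch the splatting step is fixed and non-differentiable with respect to vertex positions (the rounding discards that gradient); learning happens in the network that paints and reasons about occlusion from the sparse input, which is the trade-off the abstract contrasts with conventional differentiable renderers.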