Deferred Neural Rendering: Image Synthesis using Neural Textures

04/28/2019
by Justus Thies, et al.

The modern computer graphics pipeline can synthesize images at remarkable visual quality; however, it requires well-defined, high-quality 3D content as input. In this work, we explore the use of imperfect 3D content, for instance, obtained from photometric reconstructions with noisy and incomplete surface geometry, while still aiming to produce photo-realistic (re-)renderings. To address this challenging problem, we introduce Deferred Neural Rendering, a new paradigm for image synthesis that combines the traditional graphics pipeline with learnable components. Specifically, we propose Neural Textures, learned feature maps that are trained as part of the scene-capture process. Like traditional textures, neural textures are stored as maps on top of 3D mesh proxies; however, their high-dimensional feature maps contain significantly more information, which can be interpreted by our new deferred neural rendering pipeline. Both the neural textures and the deferred neural renderer are trained end-to-end, enabling us to synthesize photo-realistic images even when the original 3D content is imperfect. In contrast to traditional, black-box 2D generative neural networks, our 3D representation gives us explicit control over the generated output and enables a wide range of application domains. For instance, because our representation is inherently embedded in 3D space, we can synthesize temporally consistent video re-renderings of recorded 3D scenes. In this way, neural textures can be used to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates. We show the effectiveness of our approach in several experiments on novel-view synthesis, scene editing, and facial reenactment, and compare against state-of-the-art approaches that leverage the standard graphics pipeline as well as conventional generative neural networks.
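The core idea of the abstract, a learnable texture sampled through a rasterized UV map and decoded by a small network, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's actual architecture: the channel counts, texture resolution, and the simple convolutional decoder (the paper uses a U-Net-style renderer) are assumptions chosen for brevity, and the UV map here is random rather than rasterized from a real mesh proxy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTexture(nn.Module):
    """Learnable feature map stored on top of a mesh proxy (sizes are illustrative)."""
    def __init__(self, channels=16, resolution=256):
        super().__init__()
        # 1 x C x H x W texture, optimized jointly with the renderer
        self.texture = nn.Parameter(torch.randn(1, channels, resolution, resolution) * 0.01)

    def forward(self, uv):
        # uv: B x H x W x 2 in [0, 1], as produced by rasterizing the mesh proxy
        grid = uv * 2.0 - 1.0  # grid_sample expects coordinates in [-1, 1]
        tex = self.texture.expand(uv.shape[0], -1, -1, -1)
        return F.grid_sample(tex, grid, align_corners=True)  # B x C x H x W

class DeferredNeuralRenderer(nn.Module):
    """Tiny CNN translating sampled features to RGB (stand-in for the paper's U-Net)."""
    def __init__(self, in_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, features):
        return self.net(features)

# End-to-end: UV map -> sample neural texture -> decode to an RGB image.
texture = NeuralTexture()
renderer = DeferredNeuralRenderer()
uv = torch.rand(2, 64, 64, 2)                  # stand-in for a rasterized UV map
rgb = renderer(texture(uv))                    # shape: 2 x 3 x 64 x 64
loss = F.mse_loss(rgb, torch.rand_like(rgb))   # photometric training loss
loss.backward()                                # gradients flow into the texture itself
```

Because `grid_sample` is differentiable with respect to the texture, the photometric loss trains the neural texture and the renderer end-to-end, which is what lets the pipeline compensate for an imperfect geometric proxy.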


Related research:

- MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures (07/30/2022). Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synt...
- State of the Art on Neural Rendering (04/08/2020). Efficient rendering of photo-realistic virtual worlds is a long standing...
- Intuitive, Interactive Beard and Hair Synthesis with Generative Models (04/15/2020). We present an interactive approach to synthesizing realistic variations ...
- Neural Human Video Rendering: Joint Learning of Dynamic Textures and Rendering-to-Video Translation (01/14/2020). Synthesizing realistic videos of humans using neural networks has been a...
- Visualizing Astronomical Data with Blender (06/14/2013). Astronomical data take on a multitude of forms -- catalogs, data cubes, ...
- Geometric Image Synthesis (09/12/2018). The task of generating natural images from 3D scenes has been a long sta...
