State of the Art on Neural Rendering

by Ayush Tewari et al.

Efficient rendering of photo-realistic virtual worlds is a long-standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning has given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. This state-of-the-art report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.




1 Introduction

The creation of photo-realistic imagery of virtual worlds has been one of the primary driving forces for the development of sophisticated computer graphics techniques. Computer graphics approaches span the range from real-time rendering, which enables the latest generation of computer games, to sophisticated global illumination simulation for the creation of photo-realistic digital humans in feature films. In both cases, one of the main bottlenecks is content creation, i.e., that a vast amount of tedious and expensive manual work of skilled artists is required for the creation of the underlying scene representations in terms of surface geometry, appearance/material, light sources, and animations. Concurrently, powerful generative models have emerged in the computer vision and machine learning communities. The seminal work on Generative Adversarial Networks (GANs) by Goodfellow et al. [goodfellow2014] has evolved in recent years into deep generative models for the creation of high resolution imagery [radford2016unsupervised, Karras2017ProgressiveGO, brock2019large] and videos [vondrick2016generating, Clark2019]. Here, control over the synthesized content can be achieved by conditioning [pix2pix2016, zhu2017unpaired] the networks on control parameters or images from other domains. Very recently, the two areas have come together and have been explored as “neural rendering”. One of the first publications that used the term neural rendering is Generative Query Network (GQN) [eslami2018neural]. It enables machines to learn to perceive their surroundings based on a representation and generation network. The authors argue that the network has an implicit notion of 3D due to the fact that it could take a varying number of images of the scene as input, and output arbitrary views with correct occlusion. Instead of an implicit notion of 3D, a variety of other methods followed that include this notion of 3D more explicitly, exploiting components of the graphics pipeline.

While classical computer graphics starts from the perspective of physics, by modeling for example geometry, surface properties and cameras, machine learning comes from a statistical perspective, i.e., learning from real world examples to generate new images. Consequently, the quality of computer graphics generated imagery relies on the physical correctness of the employed models, while the quality of the machine learning approaches mostly relies on carefully-designed machine learning models and the quality of the used training data. Explicit reconstruction of scene properties is hard and error prone and leads to artifacts in the rendered content. Image-based rendering methods try to overcome these issues by using simple heuristics to combine captured imagery, but in complex scenery these methods show artifacts like seams or ghosting. Neural rendering brings the promise of addressing both reconstruction and rendering by using deep networks to learn complex mappings from captured images to novel images. Neural rendering combines physical knowledge, e.g., mathematical models of projection, with learned components to yield new and powerful algorithms for controllable image generation. Neural rendering does not yet have a clear definition in the literature. Here, we define Neural Rendering as:

Deep image or video generation approaches that enable explicit or implicit control of scene properties such as illumination, camera parameters, pose, geometry, appearance, and semantic structure.

This state-of-the-art report defines and classifies the different types of neural rendering approaches. Our discussion focuses on methods that combine computer graphics and learning-based primitives to yield new and powerful algorithms for controllable image generation, since controllability in the image generation process is essential for many computer graphics applications. One central scheme around which we structure this report is the kind of control afforded by each approach. We start by discussing the fundamental concepts of computer graphics, vision, and machine learning that are prerequisites for neural rendering. Afterwards, we discuss critical aspects of neural rendering approaches, such as: type of control, how the control is provided, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. Next, we discuss the landscape of applications that is enabled by neural rendering. The applications of neural rendering range from novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, and free-viewpoint video, to the creation of photo-realistic avatars for virtual and augmented reality telepresence. Since the creation and manipulation of images that are indistinguishable from real photos has many social implications, especially when humans are photographed, we also discuss these implications and the detectability of synthetic content. As the field of neural rendering is still rapidly evolving, we conclude with current open research problems.

2 Related Surveys and Course Notes

Deep Generative Models have been widely studied in the literature, with several surveys [salakhutdinov2015learning, Oussidi18, ou2018review] and course notes [openAIblog, StanfordCourse, IJCAICourse] describing them. Several reports focus on specific generative models, such as Generative Adversarial Networks (GANs) [wang2019generative, creswell2018generative, goodfellow2016nips, CVPR2018tutorialGAN, Pan2018GANSurvey] and Variational Autoencoders (VAEs) [doersch2016tutorial, kingma2019introduction]. Controllable image synthesis using classic computer graphics and vision techniques has also been studied extensively. Image-based rendering has been discussed in several survey reports [shum2000review, zhang2004survey]. The book of Szeliski [szeliski2010computer] gives an excellent introduction to 3D reconstruction and image-based rendering techniques. Recent survey reports [egger20193d, Zollhoefer2018FaceSTAR] discuss approaches for 3D reconstruction and controllable rendering of faces for various applications. Some aspects of neural rendering have been covered in tutorials and workshops of recent computer vision conferences. These include approaches for free viewpoint rendering and relighting of full body performances [ECCV2018tutorialUltra, CVPR2018tutorialUltra, CVPR2019tutorialUltra], tutorials on neural rendering for face synthesis [ECCV2018tutorialFace] and 3D scene generation using neural networks [CVPR2019tutorial3DSene]. However, none of the above surveys and courses provide a structured and comprehensive look into neural rendering and all of its various applications.

3 Scope of this STAR

In this state-of-the-art report, we focus on novel approaches that combine classical computer graphics pipelines and learnable components. Specifically, we discuss where and how classical rendering pipelines can be improved by machine learning, and which data is required for training. To give a comprehensive overview, we also give a short introduction to the pertinent fundamentals of both fields, i.e., computer graphics and machine learning. The benefits of the current hybrids are shown, as well as their limitations. This report also discusses novel applications that are empowered by these techniques. We focus on techniques with the primary goal of generating controllable photo-realistic imagery via machine learning. We do not cover work on geometric and 3D deep learning [mescheder2019occupancy, saito2019pifu, qi2016pointnet, choy20163d, park2019deepsdf], which is more focused on 3D reconstruction and scene understanding. This branch of work is highly inspiring for many neural rendering approaches, especially ones that are based on 3D-structured scene representations, but goes beyond the scope of this survey. We also do not focus on techniques that employ machine learning for denoising raytraced imagery [Chaitanya:2017, LBF].

4 Theoretical Fundamentals

In the following, we discuss theoretical fundamentals of work in the neural rendering space. First, we discuss image formation models in computer graphics, followed by classic image synthesis methods. Next, we discuss approaches to generative models in deep learning.

4.1 Physical Image Formation

Classical computer graphics methods approximate the physical process of image formation in the real world: light sources emit photons that interact with the objects in the scene, as a function of their geometry and material properties, before being recorded by a camera. This process is known as light transport. Camera optics acquire and focus incoming light from an aperture onto a sensor or film plane inside the camera body. The sensor or film records the amount of incident light on that plane, sometimes in a nonlinear fashion. All the components of image formation—light sources, material properties, and camera sensors—are wavelength-dependent. Real films and sensors often record only one to three different wavelength distributions, tuned to the sensitivity of the human visual system. All the steps of this physical image formation are modelled in computer graphics: light sources, scene geometry, material properties, light transport, optics, and sensor behavior.

4.1.1 Scene Representations

To model objects in a scene, many different representations for scene geometry have been proposed. They can be classified into explicit and implicit representations. Explicit methods describe scenes as a collection of geometric primitives, such as triangles, point-like primitives, or higher-order parametric surfaces. Implicit representations include signed distance functions mapping from $\mathbb{R}^3$ to $\mathbb{R}$, such that the surface is defined as the zero level set of the function (or any other level set). In practice, most hardware and software renderers are tuned to work best on triangle meshes, and will convert other representations into triangles for rendering.
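For instance, a sphere admits a closed-form signed distance function whose zero level set is the surface. A minimal sketch (the function name and default values are illustrative, not from any specific system):

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface (the zero level set), positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius
```

A renderer would raycast this function directly, or extract a mesh from its zero level set for rasterization.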

The interactions of light with scene surfaces depend on the material properties of the surfaces. Materials may be represented as bidirectional reflectance distribution functions (BRDFs) or bidirectional subsurface scattering reflectance distribution functions (BSSRDFs). A BRDF is a 5-dimensional function that describes how much light of a given wavelength incident on a surface point from each incoming ray direction is reflected toward each exiting ray direction. While a BRDF only models light interactions that happen at a single surface point, a BSSRDF models how light incident on one surface point is reflected at a different surface point, thus making it a 7-D function. BRDFs can be represented using analytical models [phong1975phong, cook1982reflectance, oren1995rough] or measured data [matusik2003brdf]. When a BRDF changes across a surface, it is referred to as a spatially-varying BRDF (svBRDF). Spatially varying behavior across geometry may be represented by binding discrete materials to different geometric primitives, or via the use of texture mapping. A texture map defines a set of continuous values of a material parameter, such as diffuse albedo, from a 2- or 3-dimensional domain onto a surface. 3-dimensional textures represent the value throughout a bounded region of space and can be applied to either explicit or implicit geometry. 2-dimensional textures map from a 2-dimensional domain onto a parametric surface; thus, they are typically applicable only to explicit geometry.
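As a minimal sketch, an ideal diffuse (Lambertian) surface has the constant BRDF albedo/π, and its reflected radiance under a single directional light follows directly (grey-scale, no shadowing or interreflection; the function names are illustrative):

```python
import math

def lambertian_brdf(albedo):
    """Constant BRDF of an ideal diffuse surface: albedo / pi."""
    return albedo / math.pi

def shade(albedo, light_radiance, n, l):
    """Reflected radiance under one directional light.
    n and l are unit vectors: surface normal and direction to light.
    Light arriving from below the surface contributes nothing."""
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return lambertian_brdf(albedo) * light_radiance * cos_theta
```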

Sources of light in a scene can be represented using parametric models; these include point or directional lights, or area sources that are represented by surfaces in the scene that emit light. Some methods account for continuously varying emission over a surface, defined by a texture map or function. Often, environment maps are used to represent dense, distant scene lighting. These environment maps can be stored as non-parametric textures on a sphere or cube, or can be approximated by coefficients of a spherical harmonic basis [Muller1966]. Any of the parameters of a scene might be modeled as varying over time, allowing both animation across successive frames, and simulations of motion blur within a single frame.

4.1.2 Camera Models

The most common camera model in computer graphics is the pinhole camera model, in which rays of light pass through a pinhole and hit a film plane (image plane). Such a camera can be parameterized by the pinhole’s 3D location, the image plane, and a rectangular region in that plane representing the spatial extent of the sensor or film. The operation of such a camera can be represented compactly using projective geometry, which converts 3D geometric representations using homogeneous coordinates into the two-dimensional domain of the image plane. This is also known as a full perspective projection model. Approximations of this model such as the weak perspective projection are often used in computer vision to reduce complexity because of the non-linearity of the full perspective projection. More accurate projection models in computer graphics take into account the effects of non-ideal lenses, including distortion, aberration, vignetting, defocus blur, and even the inter-reflections between lens elements [Sturm:2011].
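After the perspective divide, the pinhole model reduces to a very simple expression. A sketch in camera coordinates (the focal length and principal point below are illustrative values, not taken from the text):

```python
def project_pinhole(point, f=800.0, cx=320.0, cy=240.0):
    """Project a 3D point (camera coordinates, z > 0) onto the image
    plane of a pinhole camera: scale by focal length f, divide by
    depth, then shift by the principal point (cx, cy)."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (f * x / z + cx, f * y / z + cy)
```

Points on the optical axis land exactly on the principal point, and the nonlinearity in depth (the division by z) is why computer vision methods sometimes substitute the weak perspective approximation.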

4.1.3 Classical Rendering

The process of transforming a scene definition including cameras, lights, surface geometry and material into a simulated camera image is known as rendering. The two most common approaches to rendering are rasterization and raytracing: Rasterization is a feed-forward process in which geometry is transformed into the image domain, sometimes in back-to-front order (known as the painter's algorithm). Raytracing is a process in which rays are cast backwards from the image pixels into a virtual scene, and reflections and refractions are simulated by recursively casting new rays from the intersections with the geometry [Whitted1980].
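Casting a ray against an implicit surface amounts to root-finding along the ray; for a sphere this is a quadratic equation. A minimal sketch (unit-length ray direction assumed; names are illustrative):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Smallest positive t such that origin + t*direction lies on the
    sphere, or None if the ray misses. direction must be unit length,
    so the quadratic's leading coefficient is 1."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer intersection
    return t if t > 0 else None
```

A recursive raytracer in the style of Whitted would spawn new reflection and refraction rays from the returned hit point.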

Hardware-accelerated rendering typically relies on rasterization, because it has good memory coherence. However, many real-world image effects such as global illumination and other forms of complex light transport, depth of field, motion blur, etc. are more easily simulated using raytracing, and recent GPUs now feature acceleration structures to enable certain uses of raytracing in real-time graphics pipelines (e.g., NVIDIA RTX or DirectX Raytracing [Haines2019]). Although rasterization requires an explicit geometric representation, raytracing/raycasting can also be applied to implicit representations. In practice, implicit representations can also be converted to explicit forms for rasterization using the marching cubes algorithm [Lorensen1987] and other similar methods. Renderers can also use combinations of rasterization and raycasting to obtain high efficiency and physical realism at the same time (e.g., screen space ray-tracing [McGuire2014RayTrace]). The quality of images produced by a given rendering pipeline depends heavily on the accuracy of the different models in the pipeline. The components must account for the discrete nature of computer simulation, such as the gaps between pixel centers, using careful application of sampling and signal reconstruction theory. The process of estimating the different model parameters (camera, geometry, material, light parameters) from real-world data, for the purpose of generating novel views, editing materials or illumination, or creating new animations is known as inverse rendering. Inverse rendering [marschner1998inverse, DADDB19, henzler2018escaping, deschaintre2018single, li2018materials], which has been explored in the context of both computer vision and computer graphics, is closely related to neural rendering. A drawback of inverse rendering is that the predefined physical models or data structures used in classical rendering do not always accurately reproduce all the features of real-world physical processes, due to either mathematical complexity or computational expense. In contrast, neural rendering introduces learned components into the rendering pipeline in place of such models. Deep neural nets can statistically approximate such physical processes, resulting in outputs that more closely match the training data, reproducing some real-world effects more accurately than inverse rendering.

Note that there are approaches at the intersection of inverse rendering and neural rendering. For example, Li et al. [li2018learning] use a neural renderer that approximates global illumination effects to efficiently train an inverse rendering method that predicts depth, normal, albedo and roughness maps. There are also approaches that use neural networks to enhance specific building blocks of the classical rendering pipeline, e.g., shaders. Rainer et al. [Rainer19Neural] learn Bidirectional Texture Functions and Maximov et al. [Maximov_2019_ICCV] learn Appearance Maps.

4.1.4 Light Transport

Light transport considers all the possible paths of light from the emitting light sources, through a scene, and onto a camera. A well-known formulation of this problem is the classical rendering equation [kajiya1986rendering]:

$$L_o(\mathbf{p}, \omega_o, \lambda, t) = L_e(\mathbf{p}, \omega_o, \lambda, t) + L_r(\mathbf{p}, \omega_o, \lambda, t)$$

where $L_o$ represents outgoing radiance from a surface as a function of location $\mathbf{p}$, ray direction $\omega_o$, wavelength $\lambda$, and time $t$. The term $L_e$ represents direct surface emission, and the term $L_r$ represents the interaction of incident light with surface reflectance:

$$L_r(\mathbf{p}, \omega_o, \lambda, t) = \int_{\Omega} f_r(\mathbf{p}, \omega_i, \omega_o, \lambda, t)\, L_i(\mathbf{p}, \omega_i, \lambda, t)\, (\omega_i \cdot \mathbf{n})\, d\omega_i$$

Note that this formulation omits consideration of transparent objects and any effects of subsurface or volumetric scattering. The rendering equation is an integral equation, and cannot be solved in closed form for nontrivial scenes, because the incident radiance $L_i$ appearing on the right hand side is the same as the outgoing radiance from another surface on the same ray. Therefore, a vast number of approximations have been developed. The most accurate approximations employ Monte Carlo simulations [veach1998thesis], sampling ray paths through a scene. Faster approximations might expand the right hand side one or two times and then truncate the recurrence, thereby simulating only a few “bounces” of light. Computer graphics artists may also simulate additional bounces by adding non-physically based light sources to the scene.
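As a toy illustration of the Monte Carlo approach, the reflection integral for a Lambertian surface under constant incident radiance can be estimated by sampling directions uniformly over the hemisphere; the analytic answer is albedo times the incident radiance, which makes the estimator easy to sanity-check. This is a sketch under strong simplifications (grey-scale, single surface point, no visibility), not any particular renderer's implementation:

```python
import random

import math

def mc_reflected_radiance(albedo, incident_radiance, n_samples=200000, seed=0):
    """Monte Carlo estimate of the reflection integral for a diffuse
    surface lit by constant incident radiance from the whole
    hemisphere. Directions are sampled uniformly in solid angle
    (pdf = 1 / (2*pi)); with that sampling, cos(theta) is uniform
    on [0, 1]. The analytic answer is albedo * incident_radiance."""
    rng = random.Random(seed)
    brdf = albedo / math.pi           # Lambertian BRDF
    pdf = 1.0 / (2.0 * math.pi)       # uniform hemisphere pdf
    total = 0.0
    for _ in range(n_samples):
        cos_theta = rng.random()      # cosine of sampled direction
        total += brdf * incident_radiance * cos_theta / pdf
    return total / n_samples
```

Importance sampling (e.g., cosine-weighted directions) would reduce the variance of this estimator; the uniform sampler above is the simplest correct choice.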

4.1.5 Image-based Rendering

In contrast to classical rendering, which projects 3D content to the 2D plane, image-based rendering techniques generate novel images by transforming an existing set of images, typically by warping and compositing them together. Image-based rendering can handle animation, as shown by Thies et al. [thies2018headon], but the most common use-case is novel view synthesis of static objects, in which image content from captured views is warped into a novel view based on a proxy geometry and estimated camera poses [Debevec1998, Gortler:1996:LUM:237170.237200, InsideOut2016]. To generate a complete new image, multiple captured views have to be warped into the target view, requiring a blending stage. The resulting image quality depends on the quality of the geometry, the number and arrangement of input views, and the material properties of the scene, since some materials change appearance dramatically across viewpoints. Although heuristic methods for blending and the correction of view-dependent effects [InsideOut2016] show good results, recent research has substituted parts of these image-based rendering pipelines with learned components. Deep neural networks have successfully been employed to reduce both blending artifacts [Hedman:2018:DBF:3272127.3275084] and artifacts that stem from view-dependent effects [thies2018ignor] (Section 6.2.1).
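A minimal sketch of one such blending heuristic: weighting source views by inverse angular distance to the target viewing direction, so that views closer to the target dominate the composite (the specific weighting is illustrative, not a particular published method):

```python
import math

def blend_weights(target_dir, view_dirs, eps=1e-3):
    """Normalized blending weights for image-based rendering: source
    views whose viewing directions are angularly closer to the target
    view receive larger weights. All directions are unit vectors; eps
    avoids division by zero for an exactly matching view."""
    angles = [
        math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(target_dir, v)))))
        for v in view_dirs
    ]
    raw = [1.0 / (a + eps) for a in angles]
    s = sum(raw)
    return [w / s for w in raw]
```

A full pipeline would multiply each warped source image by its weight and sum them; learned blending replaces exactly this hand-crafted weighting with a network.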

4.2 Deep Generative Models

While traditional computer graphics methods focus on physically modeling scenes and simulating light transport to generate images, machine learning can be employed to tackle this problem from a statistical standpoint, by learning the distribution of real world imagery. Compared to classical image-based rendering, which historically has used small sets of images (e.g., hundreds), deep generative models can learn image priors from large-scale image collections.

Seminal work on deep generative models [ackley1985learning, hinton2006reducing, salakhutdinov2009deep] learned to generate random samples of simple digits and frontal faces. In these early results, both the quality and the resolution were far from that achievable using physically-based rendering techniques. However, more recently, photo-realistic image synthesis has been demonstrated using Generative Adversarial Networks (GANs) [goodfellow2014] and their extensions. Recent work can synthesize random high-resolution portraits that are often indistinguishable from real faces [karras2019style].

Deep generative models excel at generating random realistic images with statistics resembling the training set. However, user control and interactivity play a key role in image synthesis and manipulation [barnes2009patchmatch]. For example, concept artists want to create particular scenes that reflect their design ideas rather than random scenes. Therefore, for computer graphics applications, generative models need to be extended to a conditional setting to gain explicit control of the image synthesis process. Early work trained feed-forward neural networks with a per-pixel $\ell_2$ distance to generate images given conditional inputs [dosovitskiy2015learning]. However, the generated results are often blurry, as the $\ell_2$ distance in pixel space considers each pixel independently and ignores the complexity of visual structure [pix2pix2016, blau2018perception]; it also tends to average multiple possible outputs. To address these issues, recent work proposes perceptual similarity distances [gatys2016image, dosovitskiy2016generating, johnson2016perceptual] that measure the discrepancy between synthesized results and ground truth outputs in a high-level deep feature embedding space constructed by a pre-trained network. Applications include artistic stylization [gatys2016image, johnson2016perceptual], image generation and synthesis [dosovitskiy2016generating, chen2017photographic], and super-resolution [johnson2016perceptual, ledig2017photo]. Matching an output to its ground truth image does not guarantee that the output looks natural [blau2018perception]. Instead of minimizing the distance between outputs and targets, conditional GANs (cGANs) aim to match the conditional distribution of outputs given inputs [MirzaO2014, pix2pix2016]. The results may not look the same as the ground truth images, but they look natural. Conditional GANs have been employed to bridge the gap between coarse computer graphics renderings and the corresponding real-world images [bi2019cg2real, zhu2017unpaired], or to produce a realistic image given a user-specified semantic layout [pix2pix2016, park2019SPADE]. Below we provide more technical details on both network architectures and learning objectives.

4.2.1 Learning a Generator

We aim to learn a neural network $G$ that can map a conditional input $x \in \mathcal{X}$ to an output $y \in \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ denote the input and output domains. We call this neural network the generator. The conditional input $x$ can take on a variety of forms depending on the targeted application, such as a user-provided sketch image, camera parameters, lighting conditions, scene attributes, and textual descriptions, among others. The output $y$ can also vary, ranging from an image or a video to 3D data such as voxels or meshes. See Table 1 for a complete list of possible network inputs and outputs for each application.

Here we describe three commonly-used generator architectures. Readers are encouraged to check application-specific details in Section 6. (1) Fully Convolutional Networks (FCNs) [matan1992multi, long2015fully] are a family of models that can take an input image of arbitrary size and predict an output of the same size. Compared to popular image classification networks such as AlexNet [krizhevsky2012imagenet] and VGG [simonyan2015very], which map an image into a vector, FCNs use fractionally-strided convolutions to preserve the spatial image resolution [zeiler2010deconvolutional]. Although originally designed for recognition tasks such as semantic segmentation and object detection, FCNs have been widely used for many image synthesis tasks. (2) U-Net [10.1007/978-3-319-24574-4_28] is an FCN-based architecture with improved localization ability. The model adds so-called “skip connections” from high-resolution feature maps at early layers to upsampled features in late-stage layers. These skip connections help to produce more detailed outputs, since high-frequency information from the input can be passed directly to the output. (3) ResNet-based generators use residual blocks [he2016deep] to pass the high-frequency information from input to output, and have been used in style transfer [johnson2016perceptual] and image super-resolution [ledig2017photo].
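The benefit of skip connections can be illustrated with a deliberately tiny 1-D toy: a downsample/upsample bottleneck loses high-frequency detail, and a skip path that carries the residual forward restores it. This is only a caricature (no learned weights; a real U-Net concatenates early feature maps and processes them with convolutions rather than adding a residual back exactly):

```python
def downsample(x):
    """Average adjacent pairs (halves the resolution)."""
    return [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x), 2)]

def upsample(x):
    """Nearest-neighbour upsampling (doubles the resolution)."""
    return [v for v in x for _ in range(2)]

def decode(signal, use_skip):
    """Toy encoder-decoder. Without a skip connection the output is
    only the upsampled bottleneck, so sharp features smear; with one,
    the high-frequency residual of the input is carried forward and
    added back (a stand-in for U-Net's skip connections)."""
    bottleneck = downsample(signal)
    coarse = upsample(bottleneck)
    if not use_skip:
        return coarse
    residual = [s - c for s, c in zip(signal, coarse)]
    return [c + r for c, r in zip(coarse, residual)]

def l1(a, b):
    """L1 distance between two equal-length signals."""
    return sum(abs(x - y) for x, y in zip(a, b))
```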

4.2.2 Learning using Perceptual Distance

Once we collect many input-output pairs and choose a generator architecture, how can we learn a generator $G$ to produce a desired output given an input? What would be an effective objective function for this learning problem? One straightforward way is to cast it as a regression problem, and to minimize the distance between the output $G(x)$ and its ground truth image $y$, as follows:

$$G^{*} = \arg\min_{G} \; \mathbb{E}_{x,y}\left[\, \lVert y - G(x) \rVert_p \,\right] \qquad (1)$$

where $\mathbb{E}_{x,y}$ denotes the expectation of the loss function over training pairs $(x, y)$, and $\lVert \cdot \rVert_p$ denotes the p-norm. Common choices include the $\ell_1$- or $\ell_2$-loss. Unfortunately, the learned generator tends to synthesize blurry images or average results over multiple plausible outputs. For example, in image colorization, the learned generator sometimes produces desaturated results due to the averaging effect [zhang2016colorful]. In image super-resolution, the generator fails to synthesize structures and details, as the p-norm looks at each pixel independently [johnson2016perceptual].
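The averaging effect is easy to demonstrate numerically: when several sharp outputs are equally plausible for the same input, the single prediction minimizing expected squared error is their per-element mean, which is "blurry" in the sense that it matches none of them. A minimal sketch:

```python
def l2_sq(a, b):
    """Squared L2 distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_single_prediction(plausible_outputs):
    """The single output minimizing total squared error against a set
    of equally likely ground truths is their per-element mean."""
    n = len(plausible_outputs)
    return [sum(col) / n for col in zip(*plausible_outputs)]
```

With two equally plausible sharp outputs [1, 0] and [0, 1], the optimum is [0.5, 0.5]: lower expected loss than committing to either answer, yet visually a washed-out blend of both.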

To design a learning objective that better aligns with human perception of image similarity, recent work [gatys2016image, johnson2016perceptual, dosovitskiy2016generating] proposes measuring the distance between deep feature representations extracted by a pre-trained image classifier (e.g., the VGG network [simonyan2015very]). Such a loss is advantageous over the p-norm, as the deep representation summarizes an entire image holistically, while the p-norm evaluates the quality of each pixel independently. Mathematically, a generator is trained to minimize the following feature matching objective:

$$G^{*} = \arg\min_{G} \; \mathbb{E}_{x,y} \sum_{t=1}^{T} \lambda_t \frac{1}{N_t} \left\lVert F^{(t)}(y) - F^{(t)}(G(x)) \right\rVert_1 \qquad (2)$$

where $F^{(t)}$ denotes the feature extractor of the $t$-th layer of the pre-trained network with $T$ layers in total, and $N_t$ denotes the total number of features in layer $t$. The hyper-parameter $\lambda_t$ denotes the weight for each layer. Though the above distance is often coined “perceptual distance”, it is intriguing why matching statistics in multi-level deep feature space can match human perception and help synthesize higher-quality results, as the networks were originally trained for image classification tasks rather than image synthesis tasks. A recent study [zhang2018unreasonable] suggests that rich features learned by strong classifiers also provide useful representations for human perceptual tasks, outperforming classic hand-crafted perceptual metrics [wang2004image, wang2003multiscale].
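A sketch of the feature-matching distance itself, with per-layer features supplied as plain lists; the pre-trained feature extractor (e.g., VGG) is assumed fixed and is not modelled here:

```python
def feature_matching_distance(feats_a, feats_b, layer_weights):
    """Weighted, size-normalized L1 distance between per-layer
    feature lists of two images: for each layer t, add
    layer_weights[t] * (1 / N_t) * ||feats_a[t] - feats_b[t]||_1,
    where N_t is the number of features in that layer."""
    d = 0.0
    for fa, fb, w in zip(feats_a, feats_b, layer_weights):
        n = len(fa)
        d += w * sum(abs(x - y) for x, y in zip(fa, fb)) / n
    return d
```

Identical feature stacks give distance zero; the layer weights let early (high-resolution) and late (semantic) layers contribute unequally.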

4.2.3 Learning with Conditional GANs

However, minimizing the distance between output and ground truth does not guarantee realistic-looking output, according to the work of Blau and Michaeli [blau2018perception]. They also prove that low distortion and photorealism are at odds with each other. Therefore, instead of distance minimization, deep generative models focus on distribution matching, i.e., matching the distribution of generated results to the distribution of training data. Among the many types of generative models, Generative Adversarial Networks (GANs) have shown promising results for many computer graphics tasks. In the original work of Goodfellow et al. [goodfellow2014], a GAN generator $G : z \to y$ learns a mapping from a low-dimensional random vector $z$ to an output image $y$. Typically, the input vector $z$ is sampled from a multivariate Gaussian or uniform distribution. The generator $G$ is trained to produce outputs that cannot be distinguished from “real” images by an adversarially trained discriminator $D$. The discriminator is trained to detect synthetic images generated by the generator. While GANs trained for object categories like faces or vehicles learn to synthesize high-quality instances of the object, the synthesized background is usually of lower quality [karras2019style, Karras2017ProgressiveGO]. Recent papers [Shaham_2019_ICCV, Ashual_2019_ICCV] try to alleviate this problem by learning generative models of a complete scene.

To add conditional information as input, conditional GANs (cGANs) [MirzaO2014, pix2pix2016] learn a mapping from an observed input $x$ and a randomly sampled vector $z$ to an output image $y$. The observed input $x$ is also passed to the discriminator, which models whether image pairs $(x, y)$ are real or fake. As mentioned before, both input and output vary according to the targeted application. In class-conditional GANs [MirzaO2014], the input $x$ is a categorical label that controls which object category a model should generate. In the case of image-conditional GANs such as pix2pix [pix2pix2016], the generator $G$ aims to translate an input image $x$, for example a semantic label map, to a realistic-looking output image $y$, while the discriminator aims to distinguish real images from generated ones. The model is trained with a paired dataset that consists of pairs of corresponding input and output images $(x, y)$. cGANs match the conditional distribution of the output given an input via the following minimax game:

$$ G^* = \arg\min_G \max_D \; \mathcal{L}_{\text{cGAN}}(G, D). $$

Here, the objective function $\mathcal{L}_{\text{cGAN}}(G, D)$ is normally defined as:

$$ \mathcal{L}_{\text{cGAN}}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]. $$
In early cGAN implementations [pix2pix2016, zhu2017unpaired], no noise vector $z$ is injected and the mapping is deterministic, as the noise tends to be ignored by the network during training. More recent work uses latent vectors $z$ to enable multimodal image synthesis [zhu2017toward, huang2018multimodal, almahairi2018augmented]. To stabilize training, cGAN-based methods [pix2pix2016, wang2018pix2pixHD] also adopt a per-pixel $\ell_1$-loss (Equation 1) and a perceptual distance loss (Equation 2).
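To make the combination of losses concrete, the following is a minimal numpy sketch of a pix2pix-style pair of objectives. It is an illustrative simplification under stated assumptions, not any paper's reference implementation: the discriminator is assumed to output probabilities, and `lam=100.0` follows the adversarial-to-$\ell_1$ weighting reported for pix2pix.

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    # Binary cross-entropy over discriminator probabilities.
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

def discriminator_loss(d_real, d_fake):
    # D is pushed toward 1 on real pairs (x, y) and 0 on fake pairs (x, G(x, z)).
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake, fake_img, real_img, lam=100.0):
    # Adversarial term (make D predict "real" on fakes) plus the per-pixel
    # l1 term that stabilizes training, weighted by lam.
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = np.mean(np.abs(fake_img - real_img))
    return adv + lam * l1
```

In a real system both losses are minimized alternately by stochastic gradient steps on $D$ and $G$; the sketch only shows the scalar objectives themselves.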

During training, the discriminator $D$ tries to improve its ability to tell real and synthetic images apart, while the generator $G$, at the same time, tries to improve its capability of fooling the discriminator. The pix2pix method adopts a U-Net [10.1007/978-3-319-24574-4_28] as the architecture of the generator and a patch-based fully convolutional network (FCN) [long2015fully] as the discriminator.

Conceptually, perceptual distance and conditional GANs are related, as both use an auxiliary network (either a fixed feature extractor or a learned discriminator) to define an effective learning objective for training a better generator $G$. At a high level of abstraction, an accurate computer vision model for assessing the quality of synthesized results can significantly help tackle neural rendering problems. However, there are two significant differences. First, perceptual distance aims to measure the discrepancy between an output instance and its ground truth, while conditional GANs measure the closeness of the conditional distributions of real and fake images. Second, for perceptual distance, the feature extractor is pre-trained and fixed, while conditional GANs adapt their discriminator on the fly according to the generator. In practice, the two methods are complementary, and many neural rendering applications use both losses simultaneously [wang2018pix2pixHD, sungatullina2018image]. Besides GANs, many promising research directions have recently emerged, including Variational Autoencoders (VAEs) [jKingma2014], auto-regressive networks (e.g., PixelCNN [Oord:2016], PixelRNN [oord2016pixel, oord2016wavenet]), and invertible density models [dinh2016density, kingma2018glow], among others. StarGAN [STARGAN2018] enables training a single model for image-to-image translation based on multiple datasets with different domains. To keep the discussion concise, we focus on GANs here. We urge our readers to review tutorials [doersch2016tutorial, kingma2019introduction] and course notes [openAIblog, StanfordCourse, IJCAICourse] for a complete picture of deep generative models.

4.2.4 Learning without Paired Data

Learning a generator with the above objectives requires hundreds to millions of paired training examples. In many real-world applications, paired training data are difficult and expensive to collect. Unlike labeling images for classification tasks, annotators have to label every single pixel for image synthesis tasks. For example, only a couple of small datasets exist for tasks like semantic segmentation. Obtaining input-output pairs for graphics tasks such as artistic stylization can be even more challenging, since the desired output often requires artistic authoring and is sometimes not even well-defined. In this setting, the model is given a source set $X$ and a target set $Y$. All we know is which target domain the output should come from: i.e., it should look like an image from domain $Y$. But given a particular input, we do not know which target image the output should be. There could be infinitely many mappings to project an image from $X$ to $Y$. Thus, we need additional constraints. Several constraints have been proposed, including the cycle-consistency loss for enforcing a bijective mapping [zhu2017unpaired, yi2017dualgan, kim2017learning], the distance-preserving loss for encouraging the output to be close to the input image either in pixel space [shrivastava2017learning] or in a feature embedding space [bousmalis2017unsupervised, taigman2017unsupervised], and the weight-sharing strategy for learning a shared representation across domains [liu2016coupled, liu2017unsupervised, huang2018multimodal]. The above methods broaden the application scope of conditional GANs and enable many graphics applications such as object transfiguration, domain transfer, and CG2real.
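As an illustration, the cycle-consistency constraint can be sketched in a few lines of numpy; `G` and `F` here are arbitrary toy callables standing in for the two learned translators between domains $X$ and $Y$:

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    # G: X -> Y and F: Y -> X. The loss penalizes both reconstruction
    # directions, encouraging the learned pair to behave like a bijection.
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> G(x) -> F(G(x)) should recover x
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> F(y) -> G(F(y)) should recover y
    return forward + backward
```

With toy maps that invert each other, such as `G = lambda a: a + 1` and `F = lambda a: a - 1`, the loss is exactly zero; in training, this term is added to the adversarial objectives of both translators.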

5 Neural Rendering

Given high-quality scene specifications, classic rendering methods can render photorealistic images for a variety of complex real-world phenomena. Moreover, rendering gives us explicit editing control over all the elements of the scene—camera viewpoint, lighting, geometry and materials. However, building high-quality scene models, especially directly from images, requires significant manual effort, and automated scene modeling from images is an open research problem. On the other hand, deep generative networks are now starting to produce visually compelling images and videos either from random noise, or conditioned on certain user specifications like scene segmentation and layout. However, they do not yet allow for fine-grained control over scene appearance and cannot always handle the complex, non-local, 3D interactions between scene properties. In contrast, neural rendering methods hold the promise of combining these approaches to enable controllable, high-quality synthesis of novel images from input images/videos. Neural rendering techniques are diverse, differing in the control they provide over scene appearance, the inputs they require, the outputs they produce, and the network structures they utilize. A typical neural rendering approach takes as input images corresponding to certain scene conditions (for example, viewpoint, lighting, layout, etc.), builds a “neural” scene representation from them, and “renders” this representation under novel scene properties to synthesize novel images. The learned scene representation is not restricted by simple scene modeling approximations and can be optimized for high quality novel images. At the same time, neural rendering approaches incorporate ideas from classical graphics—in the form of input features, scene representations, and network architectures—to make the learning task easier, and the output more controllable.

We propose a taxonomy of neural rendering approaches along the axes that we consider the most important:

  • Control: What do we want to control and how do we condition the rendering on the control signal?

  • CG Modules: Which computer graphics modules are used and how are they integrated into a neural rendering pipeline?

  • Explicit or Implicit Control: Does the method give explicit control over the parameters or is it done implicitly by showing an example of what we expect to get as output?

  • Multi-modal Synthesis: Is the method trained to output multiple optional outputs, given a specific input?

  • Generality: Is the rendering approach generalized over multiple scenes/objects?

In the following, we discuss these axes that we use to classify current state-of-the-art methods (see also Table 1).

5.1 Control

Neural rendering aims to render high-quality images under user-specified scene conditions. In the general case, this is an open research problem. Instead, current methods tackle specific sub-problems like novel view synthesis [Hedman:2018:DBF:3272127.3275084, thies2018ignor, sitzmann2019deepvoxels, sitzmann2019srns], relighting under novel lighting [xu2018deeprelighting, relightables], and animating faces [Kim:2018:DVP:3197517.3201283, Thies:2019:DNR:3306346.3323035, Fried2019] and bodies [AbermanSLLCC19, Shysheya2019TexturedNA, Martin-Brualla:2018:LEP:3272127.3275099] under novel expressions and poses. One main axis along which these approaches differ is how the control signal is provided to the network. One strategy is to directly pass the scene parameters as input to the first or an intermediate network layer [eslami2018neural]. Related strategies are to tile the scene parameters across all pixels of an input image, or to concatenate them to the activations of an inner network layer [meka2019deep, sun2019single]. Another approach is to rely on the spatial structure of images and employ an image-to-image translation network to map from a “guide image” or “conditioning image” to the output. For example, such approaches might learn to map from a semantic mask to the output image [karacan2016learning, park2019SPADE, wang2018pix2pixHD, zhu2016generative, Bau:Ganpaint:2019, brock2017neural, chen2017photographic, pix2pix2016]. Another option, which we describe in the following, is to use the control parameters as input to a graphics layer.
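The tiling strategy mentioned above is simple to express. The sketch below is an assumption about a typical implementation, not any specific paper's code: it broadcasts a low-dimensional control vector over all pixels and concatenates it to the image channels, producing an input a convolutional network can consume.

```python
import numpy as np

def tile_condition(image, params):
    """image: (H, W, C) input; params: (P,) scene parameters such as a
    lighting or pose code. Returns an (H, W, C + P) conditioned input in
    which every pixel carries a copy of the control vector."""
    h, w, _ = image.shape
    tiled = np.broadcast_to(params, (h, w, params.shape[0]))
    return np.concatenate([image, tiled], axis=-1)
```

Because every spatial location sees the same control code, subsequent convolutions can modulate their output by the scene parameters without any architectural changes.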

5.2 Computer Graphics Modules

One emerging trend in neural rendering is the integration of computer graphics knowledge into the network design. Approaches therefore differ in the level of “classical” graphics knowledge embedded in the system. For example, directly mapping from the scene parameters to the output image does not make use of any graphics knowledge. One simple way to integrate graphics knowledge is a non-differentiable computer graphics module. Such a module can, for example, be used to render an image of the scene and pass it as dense conditioning input to the network [Kim:2018:DVP:3197517.3201283, LiuBody2018, Fried2019, Martin-Brualla:2018:LEP:3272127.3275099]. Many different channels can be provided as network inputs, such as a depth map, normal map, camera/world space position maps, albedo map, a diffuse rendering of the scene, and many more. This transforms the problem into an image-to-image translation task, a well-researched setting that can, for example, be tackled by a deep conditional generative model with skip connections. A deeper integration of graphics knowledge into the network is possible based on differentiable graphics modules. Such a differentiable module can, for example, implement a complete computer graphics renderer [Lombardi:2019:NVL:3306346.3323020, sitzmann2019srns], a 3D rotation [sitzmann2019deepvoxels, nguyen2018rendernet, hologan], or an illumination model [shu2017neural]. Such components add a physically inspired inductive bias to the network, while still allowing for end-to-end training via backpropagation. This can be used to analytically enforce a truth about the world in the network structure, which frees up network capacity and leads to better generalization, especially if only limited training data is available.
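As a concrete example of such a module, a Lambertian illumination model can be written entirely in smooth (or piecewise-smooth) array operations, so an autodiff framework could propagate gradients to albedo, normals, and lighting. This is a toy sketch in the spirit of the cited works, not code taken from them:

```python
import numpy as np

def lambertian_shading(albedo, normals, light_dir):
    """albedo: (H, W), normals: (H, W, 3) unit vectors, light_dir: (3,).
    Returns the diffuse image albedo * max(0, n . l). Every operation is
    differentiable almost everywhere, which is what allows such a module
    to sit inside an end-to-end trained network."""
    l = light_dir / np.linalg.norm(light_dir)
    n_dot_l = np.clip(np.sum(normals * l, axis=-1), 0.0, None)  # clamp backfacing
    return albedo * n_dot_l
```

Ported to an autodiff framework, the same few lines let a network regress albedo and normals from photographs while the shading layer enforces the image-formation model analytically.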

5.3 Explicit vs. Implicit Control

Another way to categorize neural rendering approaches is by the type of control. Some approaches allow for explicit control, i.e., a user can edit the scene parameters manually in a semantically meaningful manner. For example, current neural rendering approaches allow for explicit control over camera viewpoint [xu2019deepviewsynthesis, thies2018ignor, hologan, eslami2018neural, Hedman:2018:DBF:3272127.3275084, Aliev2019, Meshry_2019_CVPR, nguyen2018rendernet, sitzmann2019srns, sitzmann2019deepvoxels], scene illumination [zhou2019portraitrelighting, xu2018deeprelighting, philip2019multiviewrelighting, meka2019deep, sun2019single], and facial pose and expression [Lombardi:2018:DAM:3197517.3201401, Thies:2019:DNR:3306346.3323035, Wei:2019:VFA:3306346.3323030, Kim:2018:DVP:3197517.3201283, Geng:2018:WGS:3272127.3275043]. Other approaches only allow for implicit control by way of a representative sample. While they can copy the scene parameters from a reference image/video, one cannot manipulate these parameters explicitly. This includes methods that transfer human head motion from a reference video to a target person [zakharov2019few], or methods which retarget full-body motion [AbermanSLLCC19, Chan2018]. Methods which allow for explicit control require training datasets with images/videos and their corresponding scene parameters. On the other hand, implicit control usually requires less supervision. These methods can be trained without explicit 3D scene parameters, only with weaker annotations. For example, while dense facial performance capture is required to train networks with explicit control for facial reenactment [Kim:2018:DVP:3197517.3201283, Thies:2019:DNR:3306346.3323035], implicit control can be achieved by training just on videos with corresponding sparse 2D keypoints [zakharov2019few].

5.4 Multi-modal Synthesis

Oftentimes it is beneficial to have several different output options to choose from. For example, when only a subset of scene parameters are controlled, there potentially exists a large multi-modal output space with respect to the other scene parameters. Instead of being presented with a single output, the user can be presented with a gallery of several choices that are visibly different from each other. Such a gallery helps the user better understand the output landscape and pick a result to their liking. To achieve outputs that are significantly different from each other, the network or control signals must have some stochasticity or structured variance. For example, variational auto-encoders [jKingma2014, larsen2016autoencoding] model processes with built-in variability and can be used to achieve multi-modal synthesis [walker2016uncertain, xue2016visual, zhu2017toward]. The latest example is Park et al. [park2019SPADE], which demonstrates one way to incorporate variability and surface it via a user interface: given the same semantic map, strikingly different images are generated with the push of a button.
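The stochasticity behind such galleries is commonly obtained with the VAE reparameterization trick. The sketch below is a generic formulation, not tied to any one of the cited systems: different noise draws yield different latent codes, and hence different decoder outputs, for the same conditioning input.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I). Writing the sample as
    a deterministic function of (mu, log_var) plus external noise keeps the
    sampling step differentiable with respect to the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

Feeding several such `z` draws through a decoder produces the gallery of visibly different outputs described above.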

5.5 Generality

Neural rendering methods differ in their object specificity. Some methods aim to train a general purpose model once, and apply it to all instances of the task at hand [xu2019deepviewsynthesis, sitzmann2019srns, nguyen2018rendernet, hologan, Hedman:2018:DBF:3272127.3275084, eslami2018neural, Bau:Ganpaint:2019, park2019SPADE, zhu2016generative, brock2017neural, zakharov2019few, pix2pix2016, karacan2016learning, chen2017photographic, wang2018pix2pixHD]. For example, if the method operates on human heads, it will aim to be applicable to all humans. Conversely, other methods are instance-specific [Chan2018, LiuBody2018, Lombardi:2018:DAM:3197517.3201401, Wei:2019:VFA:3306346.3323030, AbermanSLLCC19, sitzmann2019deepvoxels, Lombardi:2019:NVL:3306346.3323020, Kim:2018:DVP:3197517.3201283, Fried2019, thies2018ignor, Aliev2019, Meshry_2019_CVPR, sitzmann2019srns]. Continuing our human head example, these networks will operate on a single person (with a specific set of clothes, in a specific location) and a new network will have to be retrained for each new subject. For many tasks, object specific approaches are currently producing higher quality results, at the cost of lengthy training times for each object instance. For real-world applications such training times are prohibitive—improving general models is an open problem and an exciting research direction.

6 Applications of Neural Rendering

Method | Required Data | Network Inputs | Network Outputs | Contents | Controllable Parameters | Explicit Control | CG Module | Generality | Multi-modal Synthesis | Temporal Coherence
Bau et al. [Bau:Ganpaint:2019] IS IS I RE S Semantic Photo Synthesis (Section 6.1)
Brock et al. [brock2017neural] I N I S R
Chen and Koltun [chen2017photographic] IS S I RE S
Isola et al. [pix2pix2016] IS S I ES S
Karacan et al. [karacan2016learning] IS S I E S
Park et al. [park2019SPADE] IS S I RE S
Wang et al. [wang2018pix2pixHD] IS S I RES S
Zhu et al. [zhu2016generative] I N I ES RT
Aliev et al. [Aliev2019] ID R I RS C N Novel View Synthesis (Section 6.2)
Eslami et al. [eslami2018neural] IC IC I RS C
Hedman et al. [Hedman:2018:DBF:3272127.3275084] V I I RES C N
Meshry et al. [Meshry_2019_CVPR] I IL I RE CL N ✓
Nguyen-Phuoc et al. [nguyen2018rendernet] ICL E I S CL N
Nguyen-Phuoc et al. [hologan] I NC I S C
Sitzmann et al. [sitzmann2019deepvoxels] V IC I S C D
Sitzmann et al. [sitzmann2019srns] IC IC I S C D
Thies et al. [thies2018ignor] V IRC I S C N
Xu et al. [xu2019deepviewsynthesis] IC IC I S C
Lombardi et al. [Lombardi:2019:NVL:3306346.3323020] VC IC I HPS C D Free Viewpoint Video (Section 6.3)
Martin-Brualla et al. [Martin-Brualla:2018:LEP:3272127.3275099] VDC R V P C N
Pandey et al. [Pandey_2019_CVPR] VDI IDC I P C
Shysheya et al. [Shysheya2019TexturedNA] V R I P CP
Meka et al. [meka2019deep] IL IL I H L Relighting
(Section 6.4)
Philip et al. [philip2019multiviewrelighting] I IL I E L N
Sun et al. [sun2019single] IL IL IL H L
Xu et al. [xu2018deeprelighting] IL IL I S L
Zhou et al. [zhou2019portraitrelighting] IL IL IL H L
Fried et al. [Fried2019] VT VR V H H N Facial Reenactment (Section 6.5)
Kim et al. [Kim:2018:DVP:3197517.3201283] V R V H PE N
Lombardi et al. [Lombardi:2018:DAM:3197517.3201401] VC IMC MX H CP N
Thies et al. [Thies:2019:DNR:3306346.3323035] V IRC I HS CE D
Wei et al. [Wei:2019:VFA:3306346.3323030] VC I MX H CP D
Zakharov et al. [zakharov2019few] I IK I H PE
Aberman et al. [AbermanSLLCC19] V J V P P Body Reenactment (Section 6.5)
Chan et al. [Chan2018] V J V P P
Liu et al. [LiuBody2018] VM R V P P N

(Column groups: Inputs and Outputs | Control | Misc)
Table 1: Selected methods presented in this survey. See Section 6 for explanation of attributes in the table and their possible values.

Neural rendering has many important use cases such as semantic photo manipulation, novel view synthesis, relighting, free-viewpoint video, as well as facial and body reenactment. Table 1 provides an overview of various applications discussed in this survey. For each, we report the attributes listed in the table: required data, network inputs and outputs, contents, controllable parameters, explicit control, CG module, generality, multi-modal synthesis, and temporal coherence.

The following is a detailed discussion of various neural rendering applications.

6.1 Semantic Photo Synthesis and Manipulation

Semantic photo synthesis and manipulation enable interactive image editing tools for controlling and modifying the appearance of a photograph in a semantically meaningful way. The seminal work Image Analogies [hertzmann2001image] creates new texture given a semantic layout and a reference image, using patch-based texture synthesis [efros1999texture, efros2001image]. Such single-image patch-based methods [hertzmann2001image, wexler07spacetime, simakov08bidir, barnes2009patchmatch] enable image reshuffling, retargeting, and inpainting, but they cannot support high-level operations such as adding a new object or synthesizing an image from scratch. Data-driven graphics systems create new imagery by compositing multiple image regions [perez2003poisson] from images retrieved from a large-scale photo collection [lalonde2007photo, chen2009sketch2photo, johnson2006semantic, hays2007scene, Eitz:2009:PhotoSketch]. These methods allow the user to specify a desired scene layout using inputs such as a sketch [chen2009sketch2photo] or a semantic label map [johnson2006semantic]. The latest development is OpenShapes [bansal2019shapes], which composes regions by matching scene context, shapes, and parts. While achieving appealing results, these systems are often slow, as they search in a large image database. In addition, undesired artifacts can sometimes be spotted due to visual inconsistency between different images.

Figure 1: GauGAN [park2019SPADE, park2019GauGAN] enables image synthesis with both semantic and style control. Please see the SIGGRAPH 2019 Real-Time Live for more details. Images taken from Park et al. [park2019SPADE].

6.1.1 Semantic Photo Synthesis

In contrast to previous non-parametric approaches, recent work has trained fully convolutional networks [long2015fully] with a conditional GAN objective [MirzaO2014, pix2pix2016] to directly map a user-specified semantic layout to a photo-realistic image [pix2pix2016, karacan2016learning, liu2017unsupervised, zhu2017unpaired, yi2017dualgan, huang2018multimodal, wang2018pix2pixHD]. Other types of user inputs such as color, sketch, and texture have also been supported [sangkloy2017scribbler, iizuka2016let, zhang2016colorful, xian2018texturegan]. Among these, pix2pix [pix2pix2016] and the method of Karacan et al. [karacan2016learning] present the first learning-based methods for semantic image synthesis, including generating street view and natural scene images. To increase the image resolution, Cascaded Refinement Networks (CRN) [chen2017photographic] learn a coarse-to-fine generator, trained with a perceptual loss [gatys2016image]. The results are high-resolution but lack high-frequency texture and details. To synthesize richer details, pix2pixHD [wang2018pix2pixHD] proposes a conditional GAN that can generate results with realistic texture. The key extensions compared to pix2pix [pix2pix2016] include a coarse-to-fine generator similar to CRN [chen2017photographic], multi-scale discriminators that capture local image statistics at different scales, and a multi-scale discriminator-based feature matching objective, which resembles perceptual distance [gatys2016image] but uses an adaptive discriminator to extract task-specific features instead. Notably, the multi-scale pipeline, a decades-old scheme in vision and graphics [burt1983laplacian, brown2003recognising], is still highly effective for deep image synthesis. Both pix2pixHD and BicycleGAN [zhu2017toward] can synthesize multiple possible outputs given the same user input, allowing a user to choose different styles. Subsequent systems [wang2018video, bansal2018recycle, bashkirova2018unsupervised] extend to the video domain, allowing a user to control the semantics of a video. Semi-parametric systems [qi2018semi, bansal2019shapes] combine classic data-driven image compositing [lalonde2007photo] and feed-forward networks [long2015fully].

Most recently, GauGAN [park2019SPADE, park2019GauGAN] uses a SPatially-Adaptive (DE)normalization layer (SPADE) to better preserve semantic information in the generator. While previous conditional models [pix2pix2016, wang2018pix2pixHD] process a semantic layout through multiple normalization layers (e.g., InstanceNorm [ulyanov2016instance]), the channel-wise normalization layers tend to “wash away” semantic information, especially for uniform and flat input layout regions. Instead, the GauGAN generator takes a random latent vector as an image style code, and employs multiple ResNet blocks with spatially-adaptive normalization layers (SPADE), to produce the final output. As shown in Figure 1, this design not only produces visually appealing results, but also enables better user control over style and semantics. The adaptive normalization layers have also been found to be effective for stylization [huang2017arbitrary] and super-resolution [wang2018recovering].
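The mechanism can be sketched as follows. This is an illustrative simplification rather than the official SPADE implementation: `gamma_proj` and `beta_proj` stand in for the small convolutional networks of the actual layer and are passed here as plain callables mapping the segmentation map to per-pixel modulation tensors.

```python
import numpy as np

def spade(x, seg, gamma_proj, beta_proj, eps=1e-5):
    """x: (H, W, C) activations; seg: (H, W, S) semantic layout.
    gamma_proj/beta_proj map seg to (H, W, C) scales and shifts, so the
    layout modulates the normalized activations spatially instead of
    being washed out by channel-wise normalization."""
    mu = x.mean(axis=(0, 1), keepdims=True)
    sigma = x.std(axis=(0, 1), keepdims=True)
    x_norm = (x - mu) / (sigma + eps)          # channel-wise normalization
    return gamma_proj(seg) * x_norm + beta_proj(seg)  # spatial (de)normalization
```

The key design point is that the scale and shift are functions of the semantic map at every pixel, so even a flat, uniform layout region re-injects semantic information after normalization.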

Figure 2: GANPaint [Bau:Ganpaint:2019] enables a few high-level image editing operations. A user can add, remove, or alter an object in an image with simple brush tools. A deep generative model then satisfies the user’s constraints while preserving natural image statistics. Images taken from Bau et al. [Bau:Ganpaint:2019].

6.1.2 Semantic Image Manipulation

The above image synthesis systems excel at creating new visual content, given user controls as inputs. However, semantic image manipulation of a user-provided image with deep generative models remains challenging for two reasons. First, editing an input image requires accurately reconstructing it with the generator, which is a difficult task even with recent GANs. Second, once the controls are applied, the newly synthesized content might not be compatible with the input photo. To address these issues, iGAN [zhu2016generative] proposes using an unconditional GAN as a natural image prior for image editing tasks. The method first optimizes a low-dimensional latent vector such that the GAN can faithfully reproduce an input photo. The reconstruction method combines quasi-Newton optimization with encoder-based initialization. The system then modifies the appearance of the generated image using color, sketch, and warping tools. To render the result, they transfer the edits from the generated image to the original photo using guided image filtering [he2012guided]. Subsequent work on Neural Photo Editing [brock2017neural] uses a VAE-GAN [larsen2016autoencoding] to encode an image into a latent vector and generates an output by blending the modified content and the original pixels. The system allows semantic editing of faces, such as adding a beard. Several works [perarnau2016invertible, yao20183d, hong2018learning] train an encoder together with the generator. They deploy a second encoder to predict additional image attributes (e.g., semantics, 3D information, facial attributes) and allow a user to modify these attributes. This idea of using GANs as a deep image prior was later used in image inpainting [yeh2017semantic] and deblurring [asim2018blind]. The above systems work well on a low-resolution image with a single object or of a certain class, and often require post-processing (e.g., filtering and blending), as the direct GAN results are not realistic enough. To overcome these challenges, GANPaint [Bau:Ganpaint:2019] adapts a pre-trained GAN model to a particular image. The learned image-specific GAN combines the prior learned from the entire image collection with the image statistics of that particular image. Similar to prior work [zhu2016generative, brock2017neural], the method first projects an input image into a latent vector. The reconstruction from the vector is close to the input, but many visual details are missing. The method then slightly changes the network’s internal parameters to reconstruct the input image more precisely. During test time, GANPaint modifies intermediate representations of GANs [bau2019gandissect] according to user inputs. Instead of training a randomly initialized CNN on a single image as done in Deep Image Prior [ulyanov2018deep], GANPaint leverages the prior learned from a pre-trained generative model and fine-tunes it for each input image. As shown in Figure 2, this enables the addition and removal of certain objects in a realistic manner. Learning distribution priors via pre-training, followed by fine-tuning on limited data, is useful for many one-shot and few-shot synthesis scenarios [10.5555/3326943.3327138, liu2019few].
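The latent-projection step common to these editing systems can be illustrated with a toy linear "generator": gradient descent on the reconstruction loss recovers the latent vector that best reproduces the target. This is a hypothetical stand-in, with the gradient written analytically, whereas real systems backpropagate through a deep generator and often add encoder-based initialization.

```python
import numpy as np

def invert_latent(target, W, z0, lr=0.05, steps=500):
    """Minimize ||W z - target||^2 over z by gradient descent, standing in
    for projecting a photo into a GAN's latent space. W plays the role of
    the (here linear, toy) generator."""
    z = z0.copy()
    for _ in range(steps):
        residual = W @ z - target       # generator output minus target image
        z -= lr * 2.0 * (W.T @ residual)  # analytic gradient of the squared loss
    return z
```

Once such a latent code is found, edits are applied in latent or feature space and the decoded result is composited back onto the original photo, as the methods above describe.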

6.1.3 Improving the Realism of Synthetic Renderings

The methods discussed above use deep generative models to either synthesize images from user-specified semantic layouts, or modify a given input image in a semantically meaningful manner. As noted before, rendering methods in computer graphics have been developed for the exact same goal: generating photorealistic images from scene specifications. However, the visual quality of computer-rendered images depends on the fidelity of the scene modeling; using low-quality scene models and/or rendering methods results in images that look obviously synthetic. Johnson et al. [johnson2011cg2real] addressed this issue by improving the realism of synthetic renderings using content from similar, real photographs retrieved from a large-scale photo collection. However, this approach is restricted by the size of the database and the simplicity of the matching metric. Bi et al. [bi2019cg2real] propose using deep generative models to accomplish this task. They train a conditional generative model to translate a low-quality rendered image (along with auxiliary information like scene normals and diffuse albedo) to a high-quality photorealistic image. They propose performing this translation on an albedo-shading decomposition (instead of image pixels) to ensure that textures are preserved. Shrivastava et al. [shrivastava2017learning] learn to improve the realism of renderings of the human eyes based on unlabeled real images, and Mueller et al. [GANeratedHands_CVPR2018] employ a similar approach for human hands. Hoffman et al. [hoffman2018cycada] extend CycleGAN [zhu2017unpaired] with feature matching to improve the realism of street view renderings for domain adaptation. Along similar lines, Nalbach et al. [Nalbach2017b] propose using deep convolutional networks to convert shading buffers such as per-pixel positions, normals, and material parameters into complex shading effects like ambient occlusion, global illumination, and depth-of-field, thus significantly speeding up the rendering process. The idea of using coarse renderings in conjunction with deep generative models to generate high-quality images has also been used in approaches for facial editing [Kim:2018:DVP:3197517.3201283, Fried2019].

6.2 Novel View Synthesis for Objects and Scenes

Novel view synthesis is the problem of generating novel camera perspectives of a scene given a fixed set of images of the same scene. Novel view synthesis methods thus deal with image and video synthesis conditioned on camera pose. Key challenges underlying novel view synthesis are inferring the scene’s 3D structure given sparse observations, as well as inpainting of occluded and unseen parts of the scene. In classical computer vision, image-based rendering (IBR) methods [Debevec1998, Chaurasia:2013:DSL:2487228.2487238] typically rely on optimization-based multi-view stereo methods to reconstruct scene geometry and warp observations into the coordinate frame of the novel view. However, if only few observations are available, the scene contains view-dependent effects, or a large part of the novel perspective is not covered by the observations, IBR may fail, leading to results with ghosting-like artifacts and holes. Neural rendering approaches have been proposed to generate higher quality results. In Neural Image-based Rendering [Hedman:2018:DBF:3272127.3275084, Meshry_2019_CVPR], previously hand-crafted parts of the IBR pipeline are replaced or augmented by learning-based methods. Other approaches reconstruct a learned representation of the scene from the observations, learning it end-to-end with a differentiable renderer. This enables learning of priors on geometry, appearance and other scene properties in a learned feature space. 
Such neural scene representation-based approaches range from prescribing little structure on the representation and the renderer [eslami2018neural], to proposing 3D-structured representations such as voxel grids of features [sitzmann2019srns, hologan], to explicit 3D disentanglement of voxels and texture [zhu2018visual], point clouds [Meshry_2019_CVPR], multi-plane images [xu2019deepviewsynthesis, Flynn19DeepView], or implicit functions [saito2019pifu, sitzmann2019srns], which equip the network with inductive biases on image formation and geometry. Neural rendering approaches have made significant progress on previously open challenges such as the generation of view-dependent effects [thies2018ignor, xu2019deepviewsynthesis] and learning priors on shape and appearance from extremely sparse observations [sitzmann2019srns, xu2019deepviewsynthesis]. While neural rendering shows better results than classical approaches, it still has limitations: current methods are restricted to a specific use case and are limited by their training data. In particular, view-dependent effects such as reflections remain challenging.

6.2.1 Neural Image-based Rendering

Neural Image-based Rendering (N-IBR) is a hybrid between classical image-based rendering and deep neural networks that replaces hand-crafted heuristics with learned components. A classical IBR method uses a set of captured images and a proxy geometry to create new images, e.g., from a different viewpoint. The proxy geometry is used to reproject image content from the captured images to the new target image domain. In the target image domain, the projections from the source images are blended to composite the final image. This simplified process gives accurate results only for diffuse objects with precise geometry reconstructed from a sufficient number of captured views. However, artifacts such as ghosting, blur, holes, or seams can arise due to view-dependent effects, imperfect proxy geometry, or too few source images. To address these issues, N-IBR methods replace the heuristics often found in classical IBR methods with learned blending functions or corrections that take view-dependent effects into account. DeepBlending [Hedman:2018:DBF:3272127.3275084] proposes a generalized network to predict blending weights of the projected source images for compositing in the target image space. It shows impressive results on indoor scenes with fewer blending artifacts than classical IBR methods. In Image-guided Neural Object Rendering [thies2018ignor], a scene-specific network called EffectsNet is trained to predict view-dependent effects. It is used to remove specular highlights from the source images to produce diffuse-only images, which can be projected into a target view without copying the source views’ highlights. EffectsNet is trained in a Siamese fashion on two different views at the same time, enforcing a multi-view photo-consistency loss. In the target view, new view-dependent effects are reapplied and the images are blended using a U-Net-like architecture. As a result, this method demonstrates novel-viewpoint synthesis on objects and small scenes, including view-dependent effects.
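The core compositing step shared by these N-IBR methods can be sketched in a few lines: given source images already reprojected into the target view, a network predicts per-pixel scores that are softmax-normalized into blending weights. The following is a minimal NumPy illustration with hand-set scores standing in for a learned predictor; `blend_views` and its arguments are our own names, not from any of the cited systems.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def blend_views(warped, scores):
    """Composite N source views already reprojected into the target frame.

    warped : (N, H, W, 3) reprojected source images
    scores : (N, H, W) per-pixel blending scores (in a real system these
             would be predicted by a network from view direction, depth
             disagreement, etc.)
    """
    w = softmax(scores, axis=0)[..., None]   # (N, H, W, 1), sums to 1 over views
    return (w * warped).sum(axis=0)          # (H, W, 3)

# toy example: two constant "views"; the second is strongly favored everywhere
views = np.stack([np.zeros((4, 4, 3)), np.ones((4, 4, 3))])
scores = np.stack([np.zeros((4, 4)), np.full((4, 4), 10.0)])
out = blend_views(views, scores)
```

Because the weights are normalized per pixel, the composite is always a convex combination of the source colors, which avoids brightness drift as views are added or removed.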

Figure 3: Neural Rerendering in the Wild [Meshry_2019_CVPR] reconstructs a proxy 3D model from a large-scale internet photo collection. This model is rendered into a deep buffer of depth, color and semantic labels (a). A neural rerendering network translates these buffers into realistic renderings under multiple appearances (b). Images taken from Meshry et al. [Meshry_2019_CVPR].

6.2.2 Neural Rerendering

Neural Rerendering combines a classical 3D representation and renderer with deep neural networks that rerender the classical render into more complete and realistic views. In contrast to Neural Image-based Rendering (N-IBR), neural rerendering does not use input views at runtime; instead it relies on the deep neural network to recover the missing details. Neural Rerendering in the Wild [Meshry_2019_CVPR] uses neural rerendering to synthesize realistic views of tourist landmarks under various lighting conditions, see Figure 3. The authors cast the problem as a multi-modal image synthesis problem that takes as input a rendered deep buffer, containing depth and color channels, together with an appearance code, and outputs realistic views of the scene. The system reconstructs a dense colored point cloud from internet photos using Structure-from-Motion and Multi-View Stereo, and for each input photo, renders the recovered point cloud into the estimated camera. Using pairs of real photos and corresponding rendered deep buffers, a multi-modal image synthesis pipeline learns an implicit model of appearance that represents time of day, weather conditions, and other properties not present in the 3D model. To prevent the model from synthesizing transient objects, like pedestrians or cars, the authors propose conditioning the rerenderer on a semantic labeling of the expected image. At inference time, this semantic labeling can be constructed to omit any such transient objects. Pittaluga et al. [pittaluga2019revealing] use a neural rerendering technique to invert Structure-from-Motion reconstructions, highlighting the privacy risks of such reconstructions, which typically contain color and SIFT features. The authors show how sparse point clouds can be inverted to generate realistic novel views.
In order to handle very sparse inputs, they propose a visibility network that classifies points as visible or not and is trained with ground-truth correspondences from 3D reconstructions.

6.2.3 Novel View Synthesis with Multiplane Images

Given a sparse set of input views of an object, Xu et al. [xu2019deepviewsynthesis] also address the problem of rendering the object from novel viewpoints (see Figure 4). Unlike previous view interpolation methods that work with images captured under natural illumination and at a small baseline, they aim to capture the light transport of the scene, including view-dependent effects like specularities. Moreover, they attempt to do this from a sparse set of images captured at large baselines, in order to make the capture process more lightweight. They capture six images of the scene under point illumination within a cone of viewpoints and render any novel viewpoint within this cone. The input images are used to construct a plane sweep volume aligned with the novel viewpoint [flynn2016deepstereo]. This volume is processed by 3D CNNs to reconstruct both scene depth and appearance. To handle the occlusions caused by the large baseline, they propose predicting attention maps that capture the visibility of the input viewpoints at different pixels. These attention maps are used to modulate the appearance plane sweep volume and remove inconsistent content. The network is trained on synthetically rendered data with supervision on both geometry and appearance; at test time it is able to synthesize photo-realistic results of real scenes featuring high-frequency light transport effects such as shadows and specularities.
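The attention-based fusion step can be illustrated in isolation: per-view plane sweep volumes are combined using softmax-normalized attention, so that views occluded at a given pixel and depth contribute less. This minimal NumPy sketch uses random features and hand-set logits in place of the learned 3D CNNs; `aggregate_psv` is a hypothetical name.

```python
import numpy as np

def aggregate_psv(psv, attention_logits):
    """Fuse a per-view plane sweep volume using predicted attention maps.

    psv              : (V, D, H, W, C) appearance volume, one per input view
    attention_logits : (V, D, H, W) visibility scores per view
    Returns a fused (D, H, W, C) volume in which occluded views are
    down-weighted at each pixel and depth plane.
    """
    a = np.exp(attention_logits - attention_logits.max(axis=0, keepdims=True))
    a = a / a.sum(axis=0, keepdims=True)           # softmax over the view axis
    return (a[..., None] * psv).sum(axis=0)

V, D, H, W, C = 3, 5, 2, 2, 4
psv = np.random.rand(V, D, H, W, C)
att = np.zeros((V, D, H, W))       # uniform attention -> plain average
fused = aggregate_psv(psv, att)
```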

Figure 4: Xu et al. [xu2019deepviewsynthesis] render scene appearance from a novel viewpoint, given only six sparse, wide baseline views. Images taken from Xu et al. [xu2019deepviewsynthesis].

DeepView [Flynn19DeepView] is a technique to visualize light fields under novel views. The view synthesis is based on multi-plane images [zhou2018stereomag] that are estimated by a learned gradient descent method given a sparse set of input views. Similar to image-based rendering, the image planes can be warped to new views and are rendered back-to-front into the target image.
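Rendering a multi-plane image reduces to back-to-front alpha compositing of its (warped) RGBA planes with the standard "over" operator. A minimal NumPy sketch with made-up planes:

```python
import numpy as np

def composite_mpi(rgba_planes):
    """Back-to-front "over" compositing of multi-plane image layers.

    rgba_planes : (D, H, W, 4) planes ordered back (index 0) to front,
                  with RGB in [0, 1] and alpha in channel 3.
    """
    out = np.zeros(rgba_planes.shape[1:3] + (3,))
    for plane in rgba_planes:                  # back to front
        rgb, a = plane[..., :3], plane[..., 3:4]
        out = rgb * a + out * (1.0 - a)        # standard over operator
    return out

# two planes: opaque red in back, half-transparent blue in front
back = np.tile([1.0, 0.0, 0.0, 1.0], (2, 2, 1))
front = np.tile([0.0, 0.0, 1.0, 0.5], (2, 2, 1))
img = composite_mpi(np.stack([back, front]))
```

In a full system such as DeepView, each plane would first be warped to the target view via a homography before this compositing step.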

6.2.4 Neural Scene Representation and Rendering

While neural rendering methods based on multi-plane images and image-based rendering have enabled some impressive results, they prescribe the model’s internal representation of the scene as a point cloud, a multi-plane image, or a mesh, and do not allow the model to learn an optimal representation of the scene’s geometry and appearance. A recent line of work in novel view synthesis thus builds models with neural scene representations: learned, feature-based representations of scene properties. The Generative Query Network (GQN) [eslami2018neural] is a framework for learning a low-dimensional feature embedding of a scene, explicitly modeling the stochastic nature of such a neural scene representation due to incomplete observations. A scene is represented by a collection of observations, where each observation is a tuple of an image and its respective camera pose. Conditioned on a set of context observations and a target camera pose, the GQN parameterizes a distribution over frames observed at the target camera pose, consistent with the context observations. The GQN is trained by maximizing the log-likelihood of each observation given other observations of the same scene as context. Given several context observations of a single scene, a convolutional encoder encodes each of them into a low-dimensional latent vector. These latent vectors are aggregated into a single representation via a sum. A convolutional Long Short-Term Memory network (ConvLSTM) parameterizes an auto-regressive prior distribution over latent variables. At every timestep, the hidden state of the ConvLSTM is decoded into a residual update to a canvas that represents the sampled observation. To make the optimization problem tractable, the GQN uses an approximate posterior at training time. The authors demonstrate the capability of the GQN to learn a rich feature representation of the scene on novel view synthesis, control of a simulated robotic arm, and the exploration of a labyrinth environment. The probabilistic formulation of the GQN allows the model to sample different frames that are all consistent with the context observations, capturing, for instance, the uncertainty about parts of the scene that were occluded in the context observations.
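The sum aggregation at the heart of the GQN scene representation is easy to sketch, and makes explicit why the representation is invariant to the order of context observations. Below, a toy linear `encode` stands in for the paper's convolutional encoder; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, pose, W_img, W_pose):
    """Toy stand-in for the GQN observation encoder (a CNN in the paper)."""
    return np.tanh(image @ W_img + pose @ W_pose)

def aggregate(latents):
    """GQN aggregates per-observation latents into one scene code via a sum,
    which makes the representation invariant to observation order."""
    return np.sum(latents, axis=0)

W_img, W_pose = rng.normal(size=(16, 8)), rng.normal(size=(6, 8))
obs = [(rng.normal(size=16), rng.normal(size=6)) for _ in range(3)]
lat = np.stack([encode(i, p, W_img, W_pose) for i, p in obs])
r = aggregate(lat)
r_shuffled = aggregate(lat[::-1])   # reversed observation order, same code
```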

6.2.5 Voxel-based Novel View Synthesis Methods

While learned, unstructured neural scene representations are an attractive alternative to hand-crafted scene representations, they come with a number of drawbacks. First and foremost, they disregard the natural 3D structure of scenes. As a result, they fail to discover multi-view and perspective geometry in regimes of limited training data. Inspired by recent progress in geometric deep learning [kar2017stereo, choy20163d, Rezende2016unsupervised, henzler2018escaping], a line of neural rendering approaches has emerged that instead proposes to represent the scene as a voxel grid, thus enforcing 3D structure.

RenderNet [nguyen2018rendernet] proposes a convolutional neural network architecture that implements differentiable rendering from a scene explicitly represented as a 3D voxel grid. The model is retrained for each class of objects and requires tuples of images with labeled camera pose. RenderNet enables novel view synthesis, texture editing, relighting, and shading. Using the camera pose, the voxel grid is first transformed to camera coordinates. A set of 3D convolutions extracts 3D features. The 3D voxel grid of features is translated to a 2D feature map via a subnetwork called the “projection unit.” The projection unit first collapses the final two channels of the 3D feature voxel grid and subsequently reduces the number of channels via 1x1 2D convolutions. The 1x1 convolutions have access to all features along a single camera ray, enabling them to perform projection and visibility computations of a typical classical renderer. Finally, a 2D up-convolutional neural network upsamples the 2D feature map and computes the final output. The authors demonstrate that RenderNet learns to render high-resolution images from low-resolution voxel grids. RenderNet can further learn to apply varying textures and shaders, enabling scene relighting and novel view synthesis of the manipulated scene. They further demonstrate that RenderNet may be used to recover a 3D voxel grid representation of a scene from single images via an iterative reconstruction algorithm, enabling subsequent manipulation of the representation.
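The projection unit can be sketched independently of the rest of RenderNet: folding the depth axis into the channel axis gives each pixel a vector containing every feature along its camera ray, which a 1x1 convolution (a per-pixel linear map, in this toy version) can then reduce. Names and sizes below are illustrative, not from the paper's code.

```python
import numpy as np

def projection_unit(vox, W):
    """Sketch of a RenderNet-style projection unit: the depth axis of a 3D
    feature grid is folded into the channel axis, then a 1x1 "convolution"
    (here a single per-pixel linear map W) mixes all features along each
    camera ray, which is where projection and visibility can be computed.

    vox : (D, H, W_, C) feature voxel grid, already in camera coordinates
    W   : (D*C, C_out) weights of the 1x1 convolution
    """
    d, h, w, c = vox.shape
    rays = vox.transpose(1, 2, 0, 3).reshape(h, w, d * c)  # ray features per pixel
    return rays @ W                                        # (H, W, C_out)

vox = np.random.rand(8, 4, 4, 6)     # depth 8, 4x4 image plane, 6 channels
W = np.random.rand(8 * 6, 16)
feat2d = projection_unit(vox, W)
```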

DeepVoxels [sitzmann2019deepvoxels] enables joint reconstruction of geometry and appearance of a scene and subsequent novel view synthesis. DeepVoxels is trained on a specific scene, given only images as well as their extrinsic and intrinsic camera parameters – no explicit scene geometry is required. This is achieved by representing a scene as a Cartesian 3D grid of embedded features, combined with a network architecture that explicitly implements image formation using multi-view and projective geometry operators. Features are first extracted from 2D observations. 2D features are then un-projected by replicating them along their respective camera rays, and integrated into the voxel grid by a small 3D U-net. To render the scene with given camera extrinsic and intrinsic parameters, a virtual camera is positioned in world coordinates. Using the intrinsic camera parameters, the voxel grid is resampled into a canonical view volume. To reason about occlusions, the authors propose an occlusion reasoning module. The occlusion module is implemented as a 3D U-Net that receives as input all the features along a camera ray as well as their depth, and produces as output a visibility score for each feature along the ray, where the scores along each ray sum to one. The final projected feature is then computed as a weighted sum of features along each ray. Finally, the resulting 2D feature map is translated to an image using a small U-Net. As a side-effect of the occlusion reasoning module, DeepVoxels produces depth maps in an unsupervised fashion. The model is fully differentiable from end to end, and is supervised only by a 2D re-rendering loss enforced over the training set. The paper shows wide-baseline novel view synthesis on several challenging scenes, both synthetic and real, and outperforms baselines that do not use 3D structure by a wide margin.
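The occlusion reasoning module's final step reduces to a softmax-weighted sum along each camera ray. The sketch below isolates that step in NumPy, with given logits standing in for the 3D U-Net's output; `project_with_visibility` is our own name.

```python
import numpy as np

def project_with_visibility(ray_feats, vis_logits):
    """DeepVoxels-style occlusion reasoning, sketched: each feature sample
    along a camera ray gets a visibility score; a softmax turns the scores
    into weights that sum to one per ray, and the projected feature is the
    visibility-weighted sum along the ray.

    ray_feats  : (S, H, W, C) features at S depth samples along each ray
    vis_logits : (S, H, W)    per-sample visibility scores
    """
    e = np.exp(vis_logits - vis_logits.max(axis=0, keepdims=True))
    w = e / e.sum(axis=0, keepdims=True)             # softmax over depth samples
    return (w[..., None] * ray_feats).sum(axis=0)    # (H, W, C)

feats = np.random.rand(10, 3, 3, 8)
logits = np.zeros((10, 3, 3))        # uniform visibility -> plain average
proj = project_with_visibility(feats, logits)
```

As the text notes, the softmax weights can be read out as a depth estimate per ray, which is how DeepVoxels produces depth maps without depth supervision.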

Visual Object Networks (VONs) [zhu2018visual] is a 3D-aware generative model for synthesizing the appearance of objects with a disentangled 3D representation. Inspired by classic rendering pipelines, VON decomposes the neural image formation model into three factors—viewpoint, shape, and texture. The model is trained with an end-to-end adversarial learning framework that jointly learns the distribution of 2D images and 3D shapes through a differentiable projection module. During test time, VON can synthesize a 3D shape, its intermediate 2.5D depth representation, and a final 2D image all at once. This 3D disentanglement allows users to manipulate the shape, viewpoint, and texture of an object independently.

HoloGAN [hologan] builds on top of the learned projection unit of RenderNet to build an unconditional generative model that allows explicit viewpoint changes. It implements an explicit affine transformation layer that directly applies view manipulations to learnt 3D features. As in DeepVoxels, the network learns a 3D feature space, but more bias about the 3D object/scene is introduced by transforming these deep voxels with a random latent vector. In this way, an unconditional GAN that natively supports viewpoint changes can be trained in an unsupervised fashion. Notably, HoloGAN requires neither pose labels nor intrinsic camera information, nor multiple views of an object.

6.2.6 Implicit-function based Approaches

While 3D voxel grids have demonstrated that a 3D-structured scene representation benefits multi-view consistent scene modeling, their memory requirement scales cubically with spatial resolution, and they do not parameterize surfaces smoothly, requiring a neural network to learn priors on shape as joint probabilities of neighboring voxels. As a result, they cannot parameterize large scenes at a sufficient spatial resolution, and have so far failed to generalize shape and appearance across scenes, which would allow applications such as reconstruction of scene geometry from only few observations. In geometric deep learning, recent work alleviated these problems by modeling geometry as the level set of a neural network [park2019deepsdf, mescheder2019occupancy]. Recent neural rendering work generalizes these approaches to allow rendering of full color images. In addition to parameterizing surface geometry via an implicit function, Pixel-Aligned Implicit Functions (PIFu) [saito2019pifu] represent object color via an implicit function. An image is first encoded into a pixel-wise feature map via a convolutional neural network. A fully connected neural network then takes as input the feature at a specific pixel location as well as a depth value, and classifies the depth as inside/outside the object. The same architecture is used to encode color. The model is trained end-to-end, supervised with images and 3D geometry. The authors demonstrate single- and multi-shot 3D reconstruction and novel view synthesis of clothed humans.
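The query interface of such a pixel-aligned implicit function can be sketched with a toy two-layer network in place of the trained classifier; all sizes and names below are illustrative.

```python
import numpy as np

def mlp(x, params):
    """Tiny fully connected network standing in for the trained classifier."""
    (W1, b1), (W2, b2) = params
    h = np.maximum(x @ W1 + b1, 0.0)                 # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # inside-probability

def query_occupancy(feature_map, u, v, z, params):
    """Pixel-aligned implicit function, sketched: the CNN feature at pixel
    (u, v) is concatenated with a depth value z along that pixel's ray and
    classified as inside/outside the object."""
    x = np.concatenate([feature_map[v, u], [z]])
    return mlp(x, params)

rng = np.random.default_rng(1)
C = 8
feature_map = rng.normal(size=(16, 16, C))           # from a CNN in the paper
params = [(rng.normal(size=(C + 1, 32)), np.zeros(32)),
          (rng.normal(size=(32, 1)), np.zeros(1))]
p = query_occupancy(feature_map, u=5, v=7, z=0.3, params=params)
```

Because the surface is defined as the level set of this classifier, the reconstruction resolution is limited only by how densely one queries depth values, not by a voxel grid.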

Figure 5: Scene Representation Networks [sitzmann2019srns] allow reconstruction of scene geometry and appearance from images via a continuous, 3D-structure-aware neural scene representation, and subsequent, multi-view-consistent view synthesis. By learning strong priors, they allow full 3D reconstruction from only a single image (bottom row, surface normals and color render). Images taken from Sitzmann et al. [sitzmann2019srns].

Scene Representation Networks (SRNs) [sitzmann2019srns] encode both scene geometry and appearance in a single fully connected neural network, the SRN, which maps world coordinates to a feature representation of local scene properties. A differentiable, learned neural ray-marcher is trained end-to-end given only images and their extrinsic and intrinsic camera parameters—no ground-truth shape information is required. The SRN takes as input world coordinates and computes a feature embedding. To render an image, camera rays are traced to their intersections with scene geometry (if any) via a differentiable, learned raymarcher, which computes the length of the next step based on the feature returned by the SRN at the current intersection estimate. The SRN is then sampled at ray intersections, yielding a feature for every pixel. This 2D feature map is translated to an image by a per-pixel fully connected network. Similarly to DeepSDF [park2019deepsdf], SRNs generalize across scenes in the same class by representing each scene with a code vector. The code vectors are mapped to the parameters of an SRN via a fully connected neural network, a so-called hypernetwork. The parameters of the hypernetwork are jointly optimized with the code vectors and the parameters of the pixel generator. The authors demonstrate single-image reconstruction of geometry and appearance of objects in the ShapeNet dataset (Figure 5), as well as multi-view consistent view synthesis. Due to their per-pixel formulation, SRNs generalize to completely unseen camera poses such as zoom or camera roll.
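The learned ray marching loop can be sketched as follows: at each step the SRN is queried at the current point, and a small module maps the resulting feature to a positive step length along the ray. Here random linear maps stand in for the trained networks and a softplus guarantees positive steps; all names are our own.

```python
import numpy as np

def srn_features(x, W):
    """Stand-in for the SRN: maps a 3D world coordinate to a feature vector."""
    return np.tanh(x @ W)

def march_ray(origin, direction, W, W_step, n_steps=10):
    """Sketch of a learned ray marcher in the spirit of SRNs: at each step
    the current feature is mapped (here by W_step) to the length of the next
    step along the ray; the final feature is what gets decoded to a pixel."""
    x = origin.copy()
    for _ in range(n_steps):
        feat = srn_features(x, W)
        step = np.log(1.0 + np.exp(feat @ W_step))   # softplus -> positive step
        x = x + step * direction
    return x, srn_features(x, W)

rng = np.random.default_rng(2)
W, W_step = rng.normal(size=(3, 16)), rng.normal(size=16) * 0.1
pt, feat = march_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]), W, W_step)
```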

6.3 Free Viewpoint Videos

Free Viewpoint Videos, also known as Volumetric Performance Capture, rely on multi-camera setups to acquire the 3D shape and texture of performers. The topic has gained a lot of interest in the research community, starting from the early work of Tanco and Hilton [hilton2], and reached compelling high-quality results with the work of Collet et al. [fvv] and its real-time counterpart by Dou et al. [dou16, dou17]. Despite these efforts, such systems lack photorealism due to missing high-frequency details [holoportation] or baked-in texture [fvv], which does not allow for accurate and convincing re-lighting of these models in arbitrary scenes. Indeed, volumetric performance capture methods lack view-dependent effects (e.g., specular highlights); moreover, imperfections in the estimated geometry usually lead to blurred texture maps. Finally, creating temporally consistent 3D models [fvv] is very challenging in many real-world cases (e.g., hair, translucent materials). A recent work on human performance capture by Guo et al. [relightables] overcomes many of these limitations by combining traditional image-based relighting methods [lightstage1] with recent advances in high-speed and accurate depth sensing [need4speed, sos]. In particular, this system uses high-resolution RGB cameras combined with active IR depth sensors to recover very accurate geometry. During the capture, the system interleaves two different lighting conditions based on spherical gradient illumination [Fyffe:2009]. This produces an unprecedented level of photorealism for a volumetric capture pipeline (Figure 6).

Figure 6: The Relightables system by Guo et al. [relightables] for free viewpoint capture of humans with realistic re-lighting. From left to right: a performer captured in the Lightstage, the estimated geometry and albedo maps, examples of relightable volumetric videos. Images taken from Guo et al. [relightables].

Despite steady progress and encouraging results obtained by these 3D capture systems, they still face important challenges and limitations. Translucent and transparent objects cannot be easily captured; reconstructing thin structures (e.g. hair) is still very challenging even with high resolution depth sensors. Nevertheless, these multi-view setups provide the foundation for machine learning methods [Martin-Brualla:2018:LEP:3272127.3275099, Lombardi:2018:DAM:3197517.3201401, Lombardi:2019:NVL:3306346.3323020, Pandey_2019_CVPR], which heavily rely on training data to synthesize high quality humans in arbitrary views and poses.

6.3.1 LookinGood with Neural Rerendering

The LookinGood system by Martin-Brualla et al. [Martin-Brualla:2018:LEP:3272127.3275099] introduced the concept of neural rerendering for performance capture of human actors. The framework relies on a volumetric performance capture system [dou17], which reconstructs the performer in real-time. These models can then be rendered from arbitrary viewpoints using the known geometry. Due to real-time constraints, the reconstruction quality suffers from many artifacts such as missing depth, low-resolution texture, and over-smoothed geometry. Martin-Brualla et al. propose to add “witness cameras”: high-resolution RGB sensors that are not used in the capture system, but provide training data for a deep learning architecture to re-render the output of the geometric pipeline. The authors show that this enables high-quality re-rendered results for arbitrary viewpoints, poses, and subjects in real-time.

Figure 7: The LookinGood system [Martin-Brualla:2018:LEP:3272127.3275099] uses real-time neural re-rendering to enhance performance capture systems. Images taken from Martin-Brualla et al. [Martin-Brualla:2018:LEP:3272127.3275099].

The problem at hand is very interesting since it tackles denoising, in-painting, and super-resolution simultaneously and in real-time. To solve it, the authors cast the task as an image-to-image translation problem [pix2pix2016] and introduce a semantic-aware loss function that uses semantic information (available at training time) to increase the weights of pixels belonging to salient areas of the image and to retrieve precise silhouettes, ignoring the contribution of the background. The overall system is shown in Figure 7.
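A loss of this flavor can be sketched as a spatially weighted L1 term, where semantic labels boost salient pixels and a foreground mask zeroes out the background. Weight values and names below are illustrative, not taken from the paper.

```python
import numpy as np

def semantic_aware_l1(pred, target, saliency, fg_mask, w_salient=3.0):
    """Sketch of a semantic-aware reconstruction loss in the spirit of
    LookinGood: pixels in salient regions (e.g. the face) get a higher
    weight and background pixels are ignored entirely.

    pred, target : (H, W, 3) images
    saliency     : (H, W) 1 where the semantic labels mark a salient region
    fg_mask      : (H, W) 1 on the foreground subject, 0 on background
    """
    w = fg_mask * (1.0 + (w_salient - 1.0) * saliency)   # 0 on bg, boosted on salient fg
    per_pixel = np.abs(pred - target).mean(axis=-1)
    return (w * per_pixel).sum() / (w.sum() + 1e-8)

pred = np.zeros((4, 4, 3))
target = np.ones((4, 4, 3))
sal = np.zeros((4, 4))
mask = np.ones((4, 4))
loss = semantic_aware_l1(pred, target, sal, mask)
```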

6.3.2 Neural Volumes

Neural Volumes [Lombardi:2019:NVL:3306346.3323020] addresses the problem of automatically creating, rendering, and animating high-quality object models from multi-view video data (see Figure 8). The method trains a neural network to encode frames of a multi-view video sequence into a compact latent code, which is decoded into a semi-transparent volume containing RGB and opacity values at each location. The volume is rendered by raymarching from the camera through the volume, accumulating color and opacity to form an output image and alpha matte. Formulating the problem in 3D rather than in screen space has several benefits: viewpoint interpolation is improved because the object must be representable as a 3D shape, and the method can be easily combined with traditional triangle-mesh rendering. The method produces high-quality models despite using a low-resolution voxel grid by introducing a learned warp field that not only helps to model the motion of the scene but also reduces blocky voxel grid artifacts by deforming voxels to better match the geometry of the scene, allowing the system to shift voxels to make better use of the available voxel resolution. The warp field is modeled as a spatially-weighted mixture of affine warp fields, which can naturally model piecewise deformations. By virtue of the semi-transparent volumetric representation, the method can reconstruct challenging objects such as moving hair, fuzzy toys, and smoke, all from only 2D multi-view video with no explicit tracking required. The latent space encoding enables animation by generating new latent space trajectories or by conditioning the decoder on information such as head pose.
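The raymarching accumulation for a semi-transparent RGB-opacity volume follows the standard front-to-back compositing recurrence; a minimal per-ray NumPy sketch:

```python
import numpy as np

def march(rgb_samples, alpha_samples):
    """Front-to-back accumulation of color and opacity along one ray, as used
    when rendering a semi-transparent RGB-alpha volume.

    rgb_samples   : (S, 3) color at each sample along the ray (front first)
    alpha_samples : (S,)   opacity at each sample
    Returns the accumulated color and the final alpha (for the alpha matte).
    """
    color = np.zeros(3)
    transmittance = 1.0                      # fraction of light still unblocked
    for rgb, a in zip(rgb_samples, alpha_samples):
        color += transmittance * a * rgb
        transmittance *= (1.0 - a)
    return color, 1.0 - transmittance

# a fully opaque red sample in front hides everything behind it
rgb = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
alpha = np.array([1.0, 1.0])
c, a = march(rgb, alpha)
```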

Figure 8: Pipeline for Neural Volumes [Lombardi:2019:NVL:3306346.3323020]. Multi-view capture is input to an encoder to produce a latent code, which is decoded to a volume that stores RGB and opacity values, as well as a warp field. Differentiable ray marching renders the volume into an image, allowing the system to be trained by minimizing the difference between rendered and target images. Images taken from Lombardi et al. [Lombardi:2019:NVL:3306346.3323020].

6.3.3 Free Viewpoint Videos from a Single Sensor

The availability of multi-view images at training and test time is one of the key elements for the success of free viewpoint systems. However, this capture technology is still far from being accessible to a typical consumer who, at best, may own a single RGBD sensor such as a Kinect. Therefore, parallel efforts [Pandey_2019_CVPR] try to make the capture technology accessible through consumer hardware by dropping the infrastructure requirements through deep learning. Reconstructing performers from a single image is closely related to the topic of synthesizing humans in unseen poses [acm18, balakrishnan_cvpr18, si_2018_CVPR, ma18, ma17, Guler2018DensePose, Chan2018]. Unlike these approaches, the recent work of Pandey et al. [Pandey_2019_CVPR] synthesizes performers in unseen poses and from arbitrary viewpoints, mimicking the behavior of volumetric capture systems. The task at hand is much more challenging because it requires disentangling pose, texture, background, and viewpoint. Pandey et al. propose to solve this problem by leveraging a semi-parametric model. In particular, they assume that a short calibration sequence of the user is available, e.g., the user rotates in front of the camera before the system starts. Multiple deep learning stages learn to combine the current viewpoint (which contains the correct user pose and expression) with the pre-recorded calibration images (which contain the correct viewpoint but wrong poses and expressions). The results are compelling given the substantial reduction in the required infrastructure.

6.4 Learning to Relight

Photo-realistic rendering of a scene under novel illumination—a procedure known as “relighting”—is a fundamental component of a number of graphics applications including compositing, augmented reality, and visual effects. An effective way to accomplish this task is to use image-based relighting methods that take as input images of the scene captured under different lighting conditions (also known as a “reflectance field”), and combine them to render the scene’s appearance under novel illumination [lightstage1]. Image-based relighting can produce high-quality, photo-realistic results and has even been used for visual effects in Hollywood productions. However, these methods require slow data acquisition with expensive, custom hardware, precluding the applicability of such methods to settings like dynamic performance and “in-the-wild” capture. Recent methods address these limitations by using synthetically rendered or real, captured reflectance field data to train deep neural networks that can relight scenes from just a few images.
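The linearity of light transport that underlies image-based relighting is simple to state in code: the relit image is a weighted sum of one-light-at-a-time (OLAT) basis images, with weights read off the target environment map. A minimal NumPy sketch (the names are ours):

```python
import numpy as np

def relight(olat_images, light_weights):
    """Classical image-based relighting: because light transport is linear,
    an image under any lighting is a weighted sum of one-light-at-a-time
    (OLAT) basis images; the weights come from the target environment map.

    olat_images   : (L, H, W, 3) reflectance field, one image per light
    light_weights : (L, 3) RGB intensity of each light in the new environment
    """
    return np.einsum('lhwc,lc->hwc', olat_images, light_weights)

L, H, W = 4, 2, 2
olat = np.ones((L, H, W, 3))
weights = np.full((L, 3), 0.25)
img = relight(olat, weights)
```

The practical cost of this linearity is the capture burden the text describes: high quality requires hundreds of OLAT basis images, which is exactly what the learned methods below try to avoid.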

6.4.1 Deep Image-based Relighting from Sparse Samples

Xu et al. [xu2018deeprelighting] propose an image-based relighting method that can relight a scene from a sparse set of five images captured under learned, optimal light directions. Their method uses a deep convolutional neural network to regress a relit image under an arbitrary directional light from these five images. Traditional image-based relighting methods rely on the linear superposition property of lighting, and thus require tens to hundreds of images for high-quality results. Instead, by training a non-linear neural relighting network, this method is able to accomplish relighting from sparse images. The relighting quality depends on the input light directions, so the authors propose combining a custom-designed sampling network with the relighting network, trained end-to-end, to jointly learn both the optimal input light directions and the relighting function. The entire system is trained on a large synthetic dataset composed of procedurally generated shapes rendered with complex, spatially-varying reflectance. At test time, the method is able to relight real scenes and reproduce complex, high-frequency lighting effects like specularities and cast shadows.

Figure 9: Xu et al. [xu2018deeprelighting] are able to generate relit versions of a scene under novel directional and environment map illumination from only five images captured under specified directional lights. Images taken from Xu et al. [xu2018deeprelighting].

6.4.2 Multi-view Scene Relighting

Given multiple views of a large-scale outdoor scene captured under uncontrolled natural illumination, Philip et al. [philip2019multiviewrelighting] can render the scene under novel outdoor lighting (parameterized by the sun position and cloudiness level). The input views are used to reconstruct the 3D geometry of the scene; this geometry is coarse and erroneous, and directly relighting it would produce poor results. Instead, the authors propose using this geometry to construct intermediate buffers—normals, reflection features, and RGB shadow maps—as auxiliary inputs to guide a neural network-based relighting method. The method also uses a shadow refinement network to improve the removal and addition of shadows, which are an important cue in outdoor images. While the entire method is trained on a synthetically rendered dataset, it generalizes to real scenes, producing high-quality results for applications like the creation of time-lapse effects from multiple (or single) images and relighting scenes in traditional image-based rendering pipelines.

6.4.3 Deep Reflectance Fields

Deep Reflectance Fields [meka2019deep] presents a novel technique to relight images of human faces by learning a model of facial reflectance from a database of 4D reflectance field data of several subjects in a variety of expressions and viewpoints. Using a learned model, a face can be relit in arbitrary illumination environments using only two original images recorded under spherical color gradient illumination [Fyffe:2009]. The high-quality results of the method indicate that the color gradient images contain the information needed to estimate the full 4D reflectance field, including specular reflections and high frequency details. While capturing images under spherical color gradient illumination still requires a special lighting setup, reducing the capture requirements to just two illumination conditions, as compared to previous methods that require hundreds of images [lightstage1], allows the technique to be applied to dynamic facial performance capture (Figure 10).

Figure 10: Meka et al. [meka2019deep] decompose the full reflectance fields by training a convolutional neural network that maps two spherical gradient images to any one-light-at-a-time image. Images taken from Meka et al. [meka2019deep].

6.4.4 Single Image Portrait Relighting

A particularly useful application of relighting methods is to change the lighting of a portrait image captured in the wild, i.e., with off-the-shelf (possibly cellphone) cameras under natural unconstrained lighting. While the input in this case is only a single (uncontrolled) image, recent methods have demonstrated state-of-the-art results using deep neural networks [sun2019single, zhou2019portraitrelighting]. The relighting model in these methods consists of a deep neural network that has been trained to take a single RGB image as input and produce as output a relit version of the portrait image under an arbitrary user-specified environment map. Additionally, the model also predicts an estimate of the current lighting conditions and, in the case of Sun et al. [sun2019single], can run on mobile devices in ~160ms, see Figure 11. Sun et al. represent the target illumination as an environment map and train their network using captured reflectance field data. On the other hand, Zhou et al. [zhou2019portraitrelighting] use a spherical harmonics representation for the target lighting and train the network with a synthetic dataset created by relighting single portrait images using a traditional ratio image-based method. Instead of having an explicit inverse rendering step for estimating geometry and reflectance [barron2014shape, shu2017neural, sengupta2018sfsnet], these methods directly regress to the final relit image from an input image and a “target” illumination. In doing so, they bypass restrictive assumptions like Lambertian reflectance and low-dimensional shape spaces that are made in traditional face relighting methods, and are able to generalize to full portrait image relighting, including hair and accessories.
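The spherical harmonics lighting representation used by Zhou et al. is compact enough to sketch: nine coefficients per color channel, evaluated against the SH basis at each surface normal. The sketch below uses the standard second-order real SH constants but, for brevity, omits the cosine-lobe convolution factors of a full irradiance formulation; all names are illustrative.

```python
import numpy as np

def sh_basis(n):
    """Second-order (9-term) real spherical harmonics basis at unit normal n,
    with the standard constants used for low-frequency lighting."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

def shade(normals, sh_coeffs):
    """Per-pixel shading from 9 SH lighting coefficients per color channel,
    illustrating the low-dimensional lighting representation of SH-based
    portrait relighting methods.

    normals   : (H, W, 3) unit surface normals
    sh_coeffs : (9, 3)    lighting coefficients
    """
    h, w, _ = normals.shape
    basis = np.stack([sh_basis(normals[i, j]) for i in range(h) for j in range(w)])
    return (basis @ sh_coeffs).reshape(h, w, 3)

normals = np.zeros((2, 2, 3)); normals[..., 2] = 1.0   # all facing +z
coeffs = np.zeros((9, 3)); coeffs[0] = 1.0             # constant (ambient) light
img = shade(normals, coeffs)
```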

Figure 11: Given a single portrait image captured with a standard camera, portrait relighting methods [sun2019single] can generate images of the subject under novel lighting environments. Images taken from Sun et al. [sun2019single].

6.5 Facial Reenactment

Facial reenactment aims to modify scene properties beyond those of viewpoint (Section 6.3) and lighting (Section 6.4), for example by generating new head pose motion, facial expressions, or speech. Early methods were based on classical computer graphics techniques. While some of these approaches only allow implicit control, i.e., retargeting facial expressions from a source to a target sequence [thies2016face], explicit control has also been explored [blanz2003reanimating]. These approaches usually involve reconstruction of a 3D face model from the input, followed by editing and rendering of the model to synthesize the edited result. Neural rendering techniques overcome the limitations of classical approaches by better dealing with inaccurate 3D reconstruction and tracking, as well as better photorealistic appearance rendering. Early neural rendering approaches, such as that of Kim et al. [Kim:2018:DVP:3197517.3201283], use a conditional GAN to refine the outputs estimated by classical methods. In addition to more photo-realistic results compared to classical techniques, neural rendering methods allow for the control of head pose in addition to facial expressions [Kim:2018:DVP:3197517.3201283, zakharov2019few, nagano2018pagan, Wiles18]. Most neural rendering approaches for facial reenactment are trained separately for each identity. Only recently, methods which generalize over multiple identities have been explored [zakharov2019few, Wiles18, nagano2018pagan, Geng:2018:WGS:3272127.3275043].

6.5.1 Deep Video Portraits

Deep Video Portraits [Kim:2018:DVP:3197517.3201283] is a system for full head reenactment of portrait videos. The head pose, facial expressions, and eye motion of the person in a video are transferred from another reference video. A facial performance capture method is used to compute 3D face reconstructions for both the reference and target videos. This reconstruction is represented using a low-dimensional semantic representation that includes identity, expression, pose, eye motion, and illumination parameters. Then, a rendering-to-video translation network, based on U-Nets, is trained to convert classical computer graphics renderings of the 3D models into photo-realistic images. The network adds photo-realistic details on top of the imperfect face renderings, and also completes the scene by adding hair, body, and background. The training data consists of pairs of training frames and their corresponding 3D reconstructions. Training is identity- and scene-specific, with only a few minutes (typically 5–10) of training data needed. At test time, the semantic dimensions relevant for reenactment, i.e., expressions, eye motion, and rigid pose, are transferred from a source to a different target 3D model. The translation network subsequently converts the new 3D sequence into a photo-realistic output sequence. Such a framework allows for interactive control of a portrait video.
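The semantic transfer step at test time amounts to recombining parameter vectors: reenactment-relevant dimensions come from the source, while identity and illumination stay with the target. A minimal sketch (the field names are illustrative, not from the paper):

```python
def transfer_parameters(source, target):
    """Reenactment-style parameter mixing: dimensions relevant for
    reenactment come from the source frame, while identity and scene
    illumination are kept from the target. Field names are ours."""
    return {
        "identity":     target["identity"],      # who the person is
        "illumination": target["illumination"],  # scene lighting stays fixed
        "expression":   source["expression"],    # transferred from the driver
        "eye_motion":   source["eye_motion"],
        "pose":         source["pose"],          # rigid head pose
    }
```

Each mixed parameter set is rendered with the classical graphics pipeline and then refined into a photo-realistic frame by the translation network.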

6.5.2 Editing Video by Editing Text

Text-based Editing of Talking-head Video [Fried2019] takes as input a one-hour-long video of a person speaking and the transcript of that video. The editor changes the transcript in a text editor, and the system synthesizes a new video in which the speaker appears to be speaking the revised transcript (Figure 12). The system supports cut, copy, and paste operations, and is also able to generate new words that were never spoken in the input video.

Figure 12: Text-based Editing of Talking-head Video [Fried2019]. An editor changes the text transcript to create a new video in which the subject appears to be saying the new phrase. In each pair, left: composites containing real pixels, rendered pixels and transition regions (in black); right: photo-realistic neural-rendering results. Images taken from Fried et al. [Fried2019].

The first part of the pipeline is not learning-based. Given a new phrase to synthesize, the system finds snippets in the original input video that, if combined, will appear to be saying the new phrase. To combine the snippets, a parameterized head model is used, similarly to Deep Video Portraits [Kim:2018:DVP:3197517.3201283]. Each video frame is converted to a low-dimensional representation, in which expression parameters (i.e., what the person is saying and how they are saying it) are decoupled from all other properties of the scene (e.g., head pose and global illumination). The snippets and the parameterized model are then used to synthesize a low-fidelity render of the desired output. Neural rendering is used to convert the low-fidelity render into a photo-realistic frame. A GAN-based encoder-decoder, again similar to Deep Video Portraits, is trained to add high-frequency details (e.g., skin pores) and fill holes to produce the final result. The neural network is person-specific, learning a person’s appearance variation in a given environment. In contrast to Kim et al. [Kim:2018:DVP:3197517.3201283], this network can deal with dynamic backgrounds.
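The snippet search can be pictured with a toy word-level stand-in (the actual system matches lip-motion units derived from phonemes and also scores how well snippet seams blend): greedily cover the new phrase with the longest contiguous runs found in the transcript.

```python
def find_snippets(transcript, phrase):
    """Greedy word-level snippet search (toy stand-in for the real
    viseme-level matching). Covers `phrase` with contiguous runs of
    `transcript`, returning (start, length) index pairs."""
    snippets, i = [], 0
    while i < len(phrase):
        best = None  # longest transcript run matching phrase[i:]
        for s in range(len(transcript)):
            k = 0
            while (s + k < len(transcript) and i + k < len(phrase)
                   and transcript[s + k] == phrase[i + k]):
                k += 1
            if k and (best is None or k > best[1]):
                best = (s, k)
        if best is None:
            raise ValueError(f"word {phrase[i]!r} never spoken in the input")
        snippets.append(best)
        i += best[1]
    return snippets
```

Longer runs mean fewer seams for the later rendering stages to hide, which is why the greedy criterion prefers them.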

6.5.3 Image Synthesis using Neural Textures

Deferred Neural Rendering [Thies:2019:DNR:3306346.3323035] enables novel-viewpoint synthesis as well as scene editing in 3D (geometry deformation, removal, copy-move). It is trained for a specific scene or object. Besides ground truth color images, it requires a coarsely reconstructed and tracked 3D mesh with a texture parameterization. Instead of a classical texture as used by Kim et al. [Kim:2018:DVP:3197517.3201283], the approach learns a neural texture, i.e., a texture that contains a neural feature descriptor per surface point. A classical computer graphics rasterizer is used to sample from these neural textures, given the 3D geometry and viewpoint, resulting in a projection of the neural feature descriptors onto the image plane. The final output image is generated from the rendered feature descriptors using a small U-Net, which is trained in conjunction with the neural texture. The paper shows several applications based on color video inputs, including novel-viewpoint synthesis, scene editing, and animation synthesis of portrait videos. The learned neural feature descriptors and the decoder network compensate for the coarseness of the underlying geometry, as well as for tracking errors, while the classical rendering step ensures 3D-consistent image formation. Similar to the usage of neural textures on meshes, Aliev et al. [Aliev2019] propose to use vertex-located feature descriptors and point-based rendering to project these to the image plane. Given the splatted features as input, a U-Net architecture is used for image generation. They show results on objects and room scenes.
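The sampling step performed by the rasterizer reduces, per pixel, to a bilinear lookup of feature vectors in texture space. A plain-Python sketch (the real implementation runs on the GPU and is differentiable end to end, so gradients flow back into the texels):

```python
def sample_neural_texture(texture, u, v):
    """Bilinearly sample a feature descriptor from a neural texture.
    `texture` is an H x W grid of feature vectors (lists of floats);
    (u, v) in [0, 1]^2 are the UV coordinates produced by rasterizing
    the coarse mesh. Returns the interpolated feature vector."""
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    dim = len(texture[0][0])
    return [
        (texture[y0][x0][c] * (1 - fx) + texture[y0][x1][c] * fx) * (1 - fy)
        + (texture[y1][x0][c] * (1 - fx) + texture[y1][x1][c] * fx) * fy
        for c in range(dim)
    ]
```

The screen-space feature image assembled from these lookups is what the small U-Net decodes into the final color image.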

6.5.4 Neural Talking Head Models

The facial reenactment approaches we have discussed so far [Kim:2018:DVP:3197517.3201283, Fried2019, Thies:2019:DNR:3306346.3323035] are person-specific, i.e., a different network has to be trained for each identity. In contrast, a generalized face reenactment approach was proposed by Zakharov et al. [zakharov2019few]. The authors train a common network to control faces of any identity using sparse 2D keypoints. The network consists of an embedder network that extracts pose-independent identity information; its output is fed to a generator network, which learns to transform given input keypoints into photo-realistic frames of the person. A large video dataset [Chung18b] consisting of talking videos of many identities is used to train the network. At test time, few-shot learning is used to refine the network for an unseen identity, similar to Liu et al. [liu2019few]. While the approach allows for control over unseen identities, it does not allow for explicit 3D control of scene parameters such as pose and expressions. Moreover, it needs a reference video from which to extract the keypoints used as input to the network.
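The few-shot conditioning can be sketched as averaging per-frame identity embeddings; here `embedder` stands in for the paper's convolutional embedder network, and everything else is our simplification.

```python
def identity_embedding(embedder, frames):
    """Few-shot identity code: embed each of the K reference frames and
    average, so the resulting code captures pose-independent appearance.
    `embedder` maps a frame to a feature vector (a CNN in the real system)."""
    codes = [embedder(f) for f in frames]
    dim = len(codes[0])
    return [sum(c[i] for c in codes) / len(codes) for i in range(dim)]
```

The averaged code conditions the generator (e.g., via adaptive instance normalization in the paper), so a handful of frames suffice to specialize it to a new person.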

6.5.5 Deep Appearance Models

Deep Appearance Models [Lombardi:2018:DAM:3197517.3201401] model facial geometry and appearance with a conditional variational autoencoder. The VAE compresses input mesh vertices and a texture map into a small latent encoding of facial expression. Importantly, this VAE is conditioned on the viewpoint of the camera used to render the face. This enables the decoder network to correct geometric tracking errors by decoding a texture map that reprojects to the correct place in the rendered image. As a result, the method produces high-quality, high-resolution, viewpoint-dependent renderings of the face that run at 90 Hz in virtual reality. The second part of this method is a system for animating the learned face model from cameras mounted on a virtual reality headset. Two cameras are placed inside the headset looking at the eyes, and one is mounted at the bottom looking at the mouth. To generate correspondence between the VR headset camera images and the multi-view capture stage images, the capture stage images are re-rendered, using image-based rendering, from the point of view of the headset cameras. A single conditional variational autoencoder then learns a common encoding of the capture stage and headset images, which can be regressed to the latent facial code learned in the first part. Wei et al. [Wei:2019:VFA:3306346.3323030] improve this facial animation system by solving for the latent facial code whose decoded avatar, when rendered from the perspective of the headset, most resembles a set of VR headset camera images (see Figure 13). To make this possible, the method uses a “training” headset with an additional six cameras looking at the eyes and mouth, which better conditions the analysis-by-synthesis formulation. The domain gap between rendered avatars and headset images is closed using unsupervised image-to-image translation techniques. This system more precisely matches lip shapes and better reproduces complex facial expressions.
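The analysis-by-synthesis formulation of Wei et al. can be caricatured in one dimension: find the latent code whose decoding best explains the observations. This toy sketch uses a finite-difference gradient descent in place of the real system's learned networks and image-space losses; all names are ours.

```python
def fit_latent(decode, observed, z0=0.0, lr=0.1, steps=200):
    """Analysis-by-synthesis (toy sketch): gradient-descend a scalar latent
    code z so that decode(z) matches the observed measurements in the
    least-squares sense. Gradients are taken by central finite differences."""
    z, h = z0, 1e-5

    def loss(z):
        return sum((d - o) ** 2 for d, o in zip(decode(z), observed))

    for _ in range(steps):
        g = (loss(z + h) - loss(z - h)) / (2 * h)  # numerical gradient
        z -= lr * g
    return z
```

In the real system, z is a high-dimensional facial code, `decode` is the avatar decoder followed by rendering from the headset cameras' viewpoints, and the loss compares against the headset images after image-to-image translation bridges the domain gap.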

Figure 13: Correspondence found by the system of Wei et al. [Wei:2019:VFA:3306346.3323030]. Cameras placed inside a virtual reality head-mounted display (HMD) produce a set of infrared images. Inverse rendering is used to find the latent code of a Deep Appearance Model [Lombardi:2018:DAM:3197517.3201401] corresponding to the images from the HMD, enabling a full-face image to be rendered. Images taken from Wei et al. [Wei:2019:VFA:3306346.3323030].

While neural rendering approaches for facial reenactment achieve impressive results, many challenges still remain to be solved. Full head reenactment, including control over the head pose, is very challenging in dynamic environments. Many of the methods discussed do not preserve high-frequency details in a temporally coherent manner. In addition, photorealistic synthesis and editing of hair motion and mouth interior including tongue motion is challenging. Generalization across different identities without any degradation in quality is still an open problem.

6.6 Body Reenactment

Figure 14: Neural pose-guided image and video generation enables the control of the position, rotation, and body pose of a person in a target video. For human performance cloning, the motion information is extracted from a source video clip (left). Interactive user control is possible by modifying the underlying skeleton model (right). Images taken from Liu et al. [LiuBody2018].

Neural pose-guided image [SiaroSLS2017, ma17, Esser2018] and video [LiuBody2018, Chan2018, AbermanSLLCC19] generation enables the control of the position, rotation, and body pose of a person in a target image/video (see Figure 14). The problem of generating realistic images of the full human body is challenging, due to the large non-linear motion space of humans. Full body performance cloning approaches [LiuBody2018, Chan2018, AbermanSLLCC19] transfer the motion in a source video to a target video. These approaches are commonly trained in a person-specific manner, i.e., they require a several-minutes-long video (often with a static background) as training data for each new target person. In the first step, the motion of the target is reconstructed based on sparse or dense human performance capture techniques. This is required to obtain the paired training corpus (pose and corresponding output image) for supervised training of the underlying neural rendering approach. Current approaches cast the problem of performance cloning as learning a conditional generative mapping based on image-to-image translation networks. The inputs are either joint heatmaps [AbermanSLLCC19], the rendered skeleton model [Chan2018], or a rendered mesh of the human [LiuBody2018]. The approach of Chan et al. [Chan2018] predicts two consecutive frames of the output video and employs a space-time discriminator for more temporal coherence. For better generalization, the approach of Aberman et al. [AbermanSLLCC19] employs a network with two separate branches that is trained in a hybrid manner based on a mixed training corpus of paired and unpaired data. The paired branch employs paired training data extracted from a reference video to directly supervise image generation based on a reconstruction loss. The unpaired branch is trained with unpaired data based on an adversarial identity loss and a temporal coherence loss. 
Textured Neural Avatars [Shysheya2019TexturedNA] predict dense texture coordinates from rendered skeletons in order to sample a learnable, but static, RGB texture. This removes the need for explicit geometry at training and test time by mapping multiple 2D views to a common texture map: effectively, the 3D geometry is mapped into a global 2D space that a deep network uses to re-render an arbitrary view at test time. The system infers novel viewpoints by conditioning the network on the desired 3D pose of the subject. Besides texture coordinates, the approach also predicts a foreground mask of the body. To ensure convergence, the authors demonstrate the need for a pre-trained DensePose model [dense_pose] to initialize the common texture map; at the same time, they show how their training procedure improves the accuracy of the 2D correspondences and sharpens the texture map by recovering high-frequency details. Given new skeleton input images, the learned pipeline can also be driven directly. This method shows consistently improved generalization compared to standard image-to-image translation approaches. On the other hand, the network is trained per subject and cannot easily generalize to unseen scales. Since the problem of human performance cloning is highly challenging, none of the existing methods obtain artifact-free results; remaining artifacts range from incoherently moving surface detail to partly missing limbs.
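Per pixel, the final image assembly of a textured neural avatar can be sketched as a texture lookup at the predicted UV coordinates, blended with the background by the predicted foreground mask (a nearest-texel lookup here for brevity; names and simplifications are ours).

```python
def composite_pixel(texture, uv, mask, background):
    """Textured-neural-avatar compositing for one pixel (a sketch):
    the network predicts texture coordinates `uv` in [0, 1]^2 and a
    foreground probability `mask`; the final color blends a nearest-texel
    lookup of the learnable texture with the background color."""
    h, w = len(texture), len(texture[0])
    u, v = uv
    texel = texture[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
    return [mask * t + (1.0 - mask) * b for t, b in zip(texel, background)]
```

Because the texture is shared across all poses and views, gradients from every training frame sharpen the same texels, which is what recovers high-frequency detail.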

7 Open Challenges

As this survey shows, significant progress on neural rendering has been made over the last few years, and it has had a high impact on a vast number of application domains. Nevertheless, we are still just at the beginning of this novel paradigm of learned image-generation approaches, which leaves us with many open challenges, but also incredible opportunities for further advancing this field. In the following, we describe open research problems and suggest next steps.

Generalization. Many of the first neural rendering approaches have been based on overfitting to a small set of images or a particular video depicting a single person, object, or scene. This is the best-case scenario for a learning-based approach, since the variation that has to be learned is limited, but it also restricts generalization capabilities. In a way, these approaches learn to interpolate between the training examples. As is true for any machine learning approach, they might fail if tested on input that is outside the span of the training samples; for example, learned reenactment approaches might fail for unseen poses [Kim:2018:DVP:3197517.3201283, LiuBody2018, Chan2018, AbermanSLLCC19]. Nevertheless, the neural rendering paradigm already empowers many applications in which the data distributions at test and training time are similar. One solution to achieve better generalization is to explicitly add the failure cases to the training corpus, but this comes at the expense of network capacity, and all failure cases might not be known a priori. Moreover, if many of the scene parameters have to be controlled, the curse of dimensionality makes capturing all potential scenarios infeasible. Even worse, if a solution should work for arbitrary people, we cannot realistically gather training data for all potential users, and even if we could, it is unclear whether such training would be successful. Thus, one of the grand challenges for the future is true generalization to unseen settings. For example, first successful steps have been taken to generalize 3D-structured neural scene representations [sitzmann2019srns, hologan, nguyen2018rendernet] across object categories. One possibility to improve generalization is to explicitly build a physically inspired inductive bias into the network. Such an inductive bias can, for example, be a differentiable camera model or an explicit 3D-structured latent space. This analytically enforces a truth about the world in the network structure and frees up network capacity. Together, this enables better generalization, especially if only limited training data is available. Another interesting direction is to explore how additional information at test time can be employed to improve generalization, e.g., a set of calibration images [Pandey_2019_CVPR] or a memory bank.


Scalability. So far, a lot of the effort has focused on very specific applications that are constrained in the complexity and size of the scenes they can handle. For example, work on (re-)rendering faces has primarily focused on processing a single person in a short video clip. Similarly, neural scene representations have been successful in representing individual objects or small environments of limited complexity. While network generalization may be able to address a larger diversity of objects or simple scenes, scalability is additionally needed to successfully process complex, cluttered, and large scenes, for example to enable dynamic crowds or city- and global-scale scenes to be processed efficiently. Part of such an effort is certainly software engineering and improved use of available computational resources, but one other possible direction that could allow neural rendering techniques to scale is to let the network reason about compositionality. A complex scene can be understood as the sum of its parts. For a network to efficiently model this intuition, it has to be able to segment a scene into objects, understand local coordinate systems, and robustly process observations with partial occlusions or missing parts. Yet, compositionality is just one step towards scalable neural rendering techniques, and further improvements in neural network architectures, as well as steps towards unsupervised learning strategies, have to be developed.

Editability. Traditional computer graphics pipelines are not only optimized for modeling and rendering capabilities, but they also allow all aspects of a scene to be edited either manually or through simulation. Neural rendering approaches today do not always offer this flexibility. Those techniques that combine learned parameters with traditional parts of the pipeline, such as neural textures, certainly allow the traditional part (i.e., the mesh) to be edited but it is not always intuitive how to edit the learned parameters (i.e., the neural texture). Achieving an intuitive way to edit abstract feature-based representations does not seem straightforward, but it is certainly worth considering how to set up neural rendering architectures to allow artists to edit as many parts of the pipeline as possible. Moreover, it is important to understand and reason about the network output as well. Even if explicit control may not be available in some cases, it may be useful to reason about failure cases.

Multimodal Neural Scene Representations. This report primarily focuses on rendering applications and, as such, most of the applications we discuss revolve around using images and videos as inputs and outputs to a network. A few of these applications also incorporate sound, for example to enable lip synchronization in a video clip when the audio is edited [Fried2019]. Yet, a network that uses both visual and audio as input may learn useful ways to process the additional input modalities. Similarly, immersive virtual and augmented reality experiences and other applications may demand multimodal output of a neural rendering algorithm that incorporates spatial audio, tactile and haptic experiences, or perhaps olfactory signals. Extending neural rendering techniques to include other senses could be a fruitful direction of future research.

8 Social Implications

In this paper, we present a multitude of neural rendering approaches with various applications and target domains. While some applications are mostly irreproachable, others, despite having legitimate and extremely useful use cases, can also be used in a nefarious manner (e.g., talking-head synthesis). Methods for image and video manipulation are as old as the media themselves and are common, for example, in the movie industry. However, neural rendering approaches have the potential to lower the barrier to entry, making manipulation technology accessible to non-experts with limited resources. While we believe that all the methods discussed in this paper were developed with the best of intentions, and indeed have the potential to positively influence the world via better communication, content creation, and storytelling, we must not be complacent. It is important to proactively discuss and devise a plan to limit misuse. We believe it is critical that synthesized images and videos clearly present themselves as synthetic. We also believe that it is essential to obtain permission from the content owner and/or performers for any alteration before sharing a resulting video. In addition, it is important that we as a community continue to develop forensics, fingerprinting, and verification techniques (digital and non-digital) to identify manipulated video (Section 8.1). Such safeguarding measures would reduce the potential for misuse while allowing creative uses of video editing technologies. Researchers must also employ responsible disclosure when appropriate, carefully considering how and to whom a new system is released. In one recent example in the field of natural language processing [radford2019language], the authors adopted a “staged release” approach [solaiman2019release], refraining from releasing the full model immediately and instead releasing increasingly more powerful versions of the implementation over a full year. The authors also partnered with security researchers and policymakers, granting early access to the full model. Learning from this example, we believe researchers must make disclosure strategies a key part of any system with a potential for misuse, and not an afterthought. We hope that repeated demonstrations of image and video synthesis will teach people to think more critically about the media they consume, especially if there is no proof of origin. We also hope that publication of the details of such systems can spread awareness and knowledge regarding their inner workings, sparking and enabling associated research into the aforementioned forgery detection, watermarking, and verification systems. Finally, we believe that a robust public conversation is necessary to create a set of appropriate regulations and laws that would balance the risks of misuse of these tools against the importance of creative, consensual use cases. For an in-depth analysis of security and safety considerations of AI systems, we refer the reader to [brundage2018malicious]. While most measures described in this section involve law, policy, and educational efforts, one measure, media forensics, is a technical challenge, as we describe next.

8.1 Forgery Detection

Integrity of digital content is of paramount importance nowadays. The integrity of an image can be verified using a pro-active protection method, such as digital signatures or watermarking, or a passive forensic analysis. An interesting concept is the ‘Secure Digital Camera’ [Blythe2004SecureDC], which not only introduces a watermark but also stores a biometric identifier of the person who took the photograph. While watermarking for forensic applications has been explored in the literature, camera manufacturers have so far failed to implement such methods in camera hardware [Blythe2004SecureDC, Yan2017MultiScaleDM, Korus2015TowardsPS, Korus2018ContentAF]. Thus, automatic passive detection of synthetic or manipulated imagery is gaining more and more importance. There is a large corpus of digital media forensics literature, which splits into manipulation-specific and manipulation-independent methods. Manipulation-specific detection methods learn to detect the artifacts produced by a specific manipulation method. FaceForensics++ [roessler2019faceforensics++] offers a large-scale dataset of different image synthesis and manipulation methods, suited to training deep neural networks in a supervised fashion. It is currently the largest forensics dataset for detecting facial manipulations, with over 4 million images. In addition, the authors show that state-of-the-art neural networks can be trained to achieve high detection rates even under different levels of image compression. Similarly, Wang et al. [wang2019detecting] scripted Photoshop to later detect photoshopped faces. The disadvantage of such manipulation-specific detection methods is the need for a large-scale training corpus per manipulation method. In ForensicTransfer [cozzolino2018forensictransfer], the authors propose a few-shot learning approach: based on a few samples of a previously unseen manipulation method, high detection rates can be achieved even without a large (labeled) training corpus.
In the scenario where no knowledge or samples of a manipulation method are available (i.e., “in the wild” manipulations), manipulation-independent methods are required. These approaches concentrate on image plausibility: physical and statistical information has to be consistent over the entire image or video (e.g., shadows or JPEG compression). This consistency can, for example, be determined in a patch-based fashion [Cozzolino2018NoiseprintAC, huh18forensics], where one patch is compared to another part of the image. Using such a strategy, a detector can be trained on real data only, using patches of different images as negative samples.
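The patch-consistency idea can be illustrated with a toy detector that compares simple per-patch statistics; real methods instead learn a camera-noise "fingerprint" from data, so the statistics and threshold below are purely illustrative.

```python
def patch_stats(patch):
    """Toy patch fingerprint: mean and variance of pixel values. Real
    forensic methods learn a noise fingerprint rather than raw moments."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((p - mean) ** 2 for p in patch) / n
    return mean, var

def patches_consistent(a, b, tol=0.1):
    """Flag two patches as inconsistent (possibly spliced from different
    sources) when their statistics diverge beyond `tol`."""
    (ma, va), (mb, vb) = patch_stats(a), patch_stats(b)
    return abs(ma - mb) <= tol and abs(va - vb) <= tol
```

The key property carries over to the learned setting: negative training pairs can be mined from patches of different real images, so no manipulated examples are needed at all.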

9 Conclusion

Neural rendering has raised a lot of interest in the past few years. This state-of-the-art report reflects the immense increase of research in this field. It is not bound to a specific application but spans a variety of use-cases that range from novel-view synthesis, semantic image editing, free viewpoint videos, relighting, face and body reenactment to digital avatars. Neural rendering has already enabled applications that were previously intractable, such as rendering of digital avatars without any manual modeling. We believe that neural rendering will have a profound impact in making complex photo and video editing tasks accessible to a much broader audience. We hope that this survey will introduce neural rendering to a large research community, which in turn will help to develop the next generation of neural rendering and graphics applications.

Acknowledgements G.W. was supported by an Okawa Research Grant, a Sloan Fellowship, NSF Awards IIS 1553333 and CMMI 1839974, and a PECASE by the ARL. V.S. was supported by a Stanford Graduate Fellowship. C.T. was supported by the ERC Consolidator Grant 4DRepLy (770784). M.N. was supported by Google, Sony, a TUM-IAS Rudolf Mößbauer Fellowship, the ERC Starting Grant Scan2CAD (804724), and a Google Faculty Award. M.A. and O.F. were supported by the Brown Institute for Media Innovation. We thank David Bau, Richard Zhang, Taesung Park, and Phillip Isola for proofreading.