FastNeRF: High-Fidelity Neural Rendering at 200FPS

03/18/2021 ∙ by Stephan J. Garbin, et al.

Recent work on Neural Radiance Fields (NeRF) showed how neural networks can be used to encode complex 3D environments that can be rendered photorealistically from novel viewpoints. Rendering these images is very computationally demanding and recent improvements are still a long way from enabling interactive rates, even on high-end hardware. Motivated by scenarios on mobile and mixed reality devices, we propose FastNeRF, the first NeRF-based system capable of rendering high fidelity photorealistic images at 200Hz on a high-end consumer GPU. The core of our method is a graphics-inspired factorization that allows for (i) compactly caching a deep radiance map at each position in space, (ii) efficiently querying that map using ray directions to estimate the pixel values in the rendered image. Extensive experiments show that the proposed method is 3000 times faster than the original NeRF algorithm and at least an order of magnitude faster than existing work on accelerating NeRF, while maintaining visual quality and extensibility.


1 Introduction

Rendering scenes in real-time at photorealistic quality has long been a goal of computer graphics. Traditional approaches such as rasterization and ray tracing often require significant manual effort in designing or pre-processing the scene in order to achieve both quality and speed. Recently, neural rendering [10, 26, 17, 24, 19] has offered a disruptive alternative: involve a neural network in the rendering pipeline to output either images directly [25, 18, 26] or to model implicit functions that represent a scene appropriately [5, 23, 28, 24, 39]. Beyond rendering, some of these approaches implicitly reconstruct a scene from static or moving cameras [24, 35, 1], thereby greatly simplifying the traditional reconstruction pipelines used in computer vision.

One of the most prominent recent advances in neural rendering is Neural Radiance Fields (NeRF) [24] which, given a handful of images of a static scene, learns an implicit volumetric representation of the scene that can be rendered from novel viewpoints. The rendered images are of high quality and correctly retain thin structures, view-dependent effects, and partially-transparent surfaces. NeRF has inspired significant follow-up work that has addressed some of its limitations, notably extensions to dynamic scenes [30, 7, 45], relighting [2, 3, 36], and incorporation of uncertainty [22].

One common challenge to all of the NeRF-based approaches is their high computational requirement for rendering images. The core of this challenge resides in NeRF’s volumetric scene representation: more than 100 neural network calls are required to render a single image pixel, which translates into several seconds to render even low-resolution images on high-end GPUs. Recent explorations [34, 15, 27, 16] conducted with the aim of reducing NeRF’s computational requirements cut the render time by up to 50×. While impressive, these advances are still a long way from enabling real-time rendering on consumer-grade hardware. Our work bridges this gap while maintaining quality, thereby opening up a wide range of new applications for neural rendering. Furthermore, our method could form the fundamental building block for neural rendering at high resolutions.

To achieve this goal, we use caching to trade memory for computational efficiency. Since NeRF is fundamentally a function mapping a position p and a ray direction d to a color c = (r, g, b) and a scalar density σ, a naïve approach would be to build a cached representation of this function over the space of its input parameters. Since σ depends only on p, it can be cached using existing methodologies. The color c, however, is a function of both the ray direction d and the position p. If this 5-dimensional space were to be discretized with 512 bins per dimension, a cache of around 176 terabytes of memory would be required – dramatically more than is available on current consumer hardware.

Ideally, we would treat directions and positions separately and thus avoid the polynomial explosion in required cache space. Fortunately, this problem is not unique to NeRF: the rendering equation (modulo the wavelength of light and time) is also a function of position and ray direction, and solving it efficiently is the primary challenge of real-time computer graphics. As such, there is a large body of research investigating ways of approximating this function as well as efficient means of evaluating the integral. One of the fastest approximations of the rendering equation uses spherical harmonics, such that the integral reduces to a dot product of the harmonic coefficients of the material model and the lighting model. Inspired by this efficient approximation, we factorize the problem into two independent functions whose outputs are combined using their inner product to produce RGB values. The first function is a Multi-Layer Perceptron (MLP) conditioned on a position in space that returns a compact deep radiance map parameterized by D components. The second function is an MLP conditioned on a ray direction that produces the weights for those D deep radiance map components.

This factorized architecture, which we call FastNeRF, allows for independently caching the position-dependent and ray direction-dependent outputs. Assuming that k and l denote the number of bins per position and ray direction coordinate respectively, caching NeRF would have a memory complexity on the order of O(k³l²). In contrast, caching FastNeRF has a complexity on the order of O(k³D + l²D), where the number of components D is a small constant. As a result of this reduced memory complexity, FastNeRF can be cached in the memory of a high-end consumer GPU, thus enabling very fast function lookup times that in turn lead to a dramatic increase in test-time performance.
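To make the gap between the two complexities concrete, here is a minimal sketch (illustrative only; the per-cell layout of D RGB components plus one density value, and the chosen bin counts, are assumptions based on the description above) that counts the stored values for both layouts:

```python
# Rough comparison of cache sizes (in number of stored values) for a naive
# 5D (position, direction) cache versus the factorized FastNeRF layout.
# k: bins per position coordinate, l: bins per direction coordinate,
# D: number of deep radiance map components. Values are illustrative.
k, l, D = 1024, 1024, 8

# Naive cache: one RGB triplet per (position, direction) pair, plus a density per position.
naive_values = k**3 * l**2 * 3 + k**3

# Factorized cache: D RGB components plus a density per position, D weights per direction.
factorized_values = k**3 * (3 * D + 1) + l**2 * D

print(f"naive:      {naive_values:.3e} values")
print(f"factorized: {factorized_values:.3e} values")
print(f"ratio:      {naive_values / factorized_values:.0f}x fewer values to store")
```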

While caching does consume a significant amount of memory, it is worth noting that current implementations of NeRF also have large memory requirements. Rendering a single pixel with NeRF requires hundreds of forward passes through an eight-layer, 256-hidden-unit MLP. If pixels are processed in parallel for efficiency, this consumes large amounts of memory even at moderate resolutions. Since many natural scenes (a living room, a garden) are sparse, we are able to store our cache sparsely; in some cases this actually makes our method more memory efficient than NeRF.

In summary, our main contributions are:

  • The first NeRF-based system capable of rendering photorealistic novel views at 200FPS, thousands of times faster than NeRF.

  • A graphics-inspired factorization that can be compactly cached and subsequently queried to compute the pixel values in the rendered image.

  • A blueprint detailing how the proposed factorization can efficiently run on the GPU.

2 Related work

FastNeRF belongs to the family of Neural Radiance Fields methods [24] and is trained to learn an implicit, compressed deep radiance map parameterized by position and view direction that provides color and density estimates. Our method differs from [24] in the structure of the implicit model, changing it in such a way that positional and directional components can be discretized and stored in sparse 3D grids.

This also differentiates our method from models that use a discretized grid at training time, such as Neural Volumes [18] or Deep Reflectance Volumes [3]. Due to the memory requirements associated with generating and holding 3D volumes at training time, the output image resolution of [18, 3] is limited by the maximum volume size of 128³. In contrast, our method only discretizes the scene after training and uses volumes as large as 1024³.

Subsequent to [18, 3], the problem of parameter estimation for an MLP with low dimensional input coordinates was addressed using Fourier feature encodings. After use in [40], this was popularized in [24, 48] and explored in greater detail by [39].

Neural Radiance Fields (NeRF) [24] first showed convincing compression of a light field utilizing Fourier features. Using an entirely implicit model, NeRF is not bound to any voxelized grid, but only to a specific domain. Despite impressive results, one disadvantage of this method is that a complex MLP has to be called for every sample along each ray, meaning hundreds of MLP invocations per image pixel.

One way to speed up NeRF’s inference is to scale to multiple processors. This works particularly well because every pixel in NeRF can be computed independently. For an 800×800 pixel image, JaxNeRF [6] achieves an inference time of 20.77 seconds on an Nvidia Tesla V100 GPU, 2.65 seconds on 8 V100 GPUs, and 0.35 seconds on 128 second-generation Tensor Processing Units. Similarly, the volumetric integration domain can be split and a separate model used for each part. This approach is taken by the Decomposed Radiance Fields method [34], where the integration domain is divided into Voronoi cells. This yields better test scores and a speedup of up to 3×.

A different way to increase efficiency is to realize that natural scenes tend to be volumetrically sparse. Thus, efficiency can be gained by skipping empty regions of space. This amounts to importance sampling of the occupancy distribution of the integration domain. One approach is to use a voxel grid in combination with the implicit function learned by the MLP, as proposed in Neural Sparse Voxel Fields (NSVF) [16] and Neural Geometric Level of Detail [37] (for SDFs only), where a dynamically constructed sparse octree is used to represent scene occupancy. As a network still has to be queried inside occupied voxels, however, NSVF takes between 1 and 4 seconds to render an image, with decreased PSNR at the lower end of those timings. Our method is orders of magnitude faster in a similar scenario.

Another way to represent the importance distribution is via per-pixel depth prediction. This approach is taken in [27], which is concurrent to our work and achieves roughly 15 FPS at reduced quality, or roughly half that at quality comparable to NeRF.

Orthogonal to this, AutoInt [15] showed that a neural network can be used to approximate the integrals along each ray with far fewer samples. While significantly faster than NeRF, this still does not provide interactive frame rates.

What differentiates our method from those described above is that FastNeRF’s proposed decomposition, and subsequent caching, lets us avoid calls to an MLP at inference time entirely. This makes our method faster in absolute terms even on a single machine.

It is worth noting that our method does not address training speed, unlike, for example, [38], where the authors improve the training speed of NeRF models by finding initialisations through meta-learning.

Finally, there are orthogonal neural rendering strategies capable of fast inference, such as Neural Point-Based Graphics [11], Mixture of Volumetric Primitives [19] or Pulsar [12], which use forms of geometric primitives and rasterization. In this work, we only deal with NeRF-like implicit functional representations paired with ray tracing.

3 Method

Figure 1: Left: the NeRF neural network architecture F. p denotes the input sample position, d denotes the ray direction, and (c, σ) are the output color and transparency values. Right: our FastNeRF architecture splits the same task into two neural networks that are amenable to caching. The position-dependent network F_pos outputs a deep radiance map consisting of D components, while the direction-dependent network F_dir outputs the weights for those components given a ray direction as input.

In this section we describe FastNeRF, a method that is 3000 times faster than the original Neural Radiance Fields (NeRF) system [24] (Section 3.1). This breakthrough allows for rendering high-resolution photorealistic images at over 200Hz on high-end consumer hardware. The core insight of our approach (Section 3.2) consists of factorizing NeRF into two neural networks: a position-dependent network that produces a deep radiance map and a direction-dependent network that produces weights. The inner product of the weights and the deep radiance map estimates the color in the scene at the specified position and as seen from the specified direction. This architecture, which we call FastNeRF, can be efficiently cached (Section 3.3), significantly improving test time efficiency whilst preserving the visual quality of NeRF. See Figure 1 for a comparison of the NeRF and FastNeRF network architectures.

3.1 Neural Radiance Fields

A Neural Radiance Field (NeRF) [24] captures a volumetric 3D representation of a scene within the weights of a neural network. NeRF’s neural network F maps a 3D position p = (x, y, z) and a ray direction d to a color value c = (r, g, b) and transparency σ. See the left side of Figure 1 for a diagram of F.

In order to render a single image pixel, a ray is cast from the camera center, passing through that pixel and into the scene. We denote the direction of this ray as d. A number of 3D positions (p_1, …, p_N) are then sampled along the ray between its near and far bounds, which are defined by the camera parameters. The neural network F is evaluated at each position p_i and ray direction d to produce color c_i and transparency σ_i. These intermediate outputs are then integrated as follows to produce the final pixel color ĉ:

$$\hat{c} = \sum_{i=1}^{N} T_i \big(1 - \exp(-\sigma_i \delta_i)\big)\, c_i, \qquad T_i = \exp\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big), \tag{1}$$

where T_i is the transmittance and δ_i is the distance between adjacent samples. Since c depends on the ray direction, NeRF has the ability to model viewpoint-dependent effects such as specular reflections, which is one key dimension in which NeRF improves upon traditional 3D reconstruction methods.
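For concreteness, below is a minimal NumPy sketch of this integration along a single ray. It illustrates Equation (1) only; the sampling strategy and the per-sample colors and densities are random stand-ins rather than the paper's implementation:

```python
import numpy as np

def integrate_ray(colors, sigmas, deltas):
    """Composite per-sample colors along a ray following Equation (1).

    colors: (N, 3) RGB values c_i returned by the network at each sample.
    sigmas: (N,) densities sigma_i at each sample.
    deltas: (N,) distances delta_i between adjacent samples.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                  # opacity of each segment
    # Transmittance T_i: fraction of light reaching sample i unoccluded.
    trans = np.exp(-np.cumsum(np.concatenate([[0.0], sigmas[:-1] * deltas[:-1]])))
    weights = trans * alphas                                 # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)           # final pixel color c_hat

# Toy usage with random stand-in network outputs for N = 64 samples.
N = 64
rng = np.random.default_rng(0)
pixel = integrate_ray(rng.random((N, 3)), rng.random(N) * 5.0, np.full(N, 1.0 / N))
print(pixel)
```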

Training a NeRF network requires a set of images of a scene as well as the extrinsic and intrinsic parameters of the cameras that captured them. In each training iteration, a subset of pixels from the training images is chosen at random and a 3D ray is generated for each pixel. Then, a set of samples is selected along each ray and the pixel color ĉ is computed using Equation (1). The training loss is the mean squared difference between ĉ and the ground-truth pixel value. For further details please refer to [24].

While NeRF renders photorealistic images, it requires calling F a large number of times to produce a single image. With the default number of samples per pixel proposed in NeRF [24], nearly 400 million calls to F are required to compute a single high-definition (1080p) image. Moreover, the intermediate outputs of this process would take hundreds of gigabytes of memory. Even on high-end consumer GPUs, this constrains the original method to be executed over several batches even for medium-resolution (800×800) images, leading to additional computational overhead.

3.2 Factorized Neural Radiance Fields

Taking a step away from neural rendering for a moment, we recall that in traditional computer graphics the rendering equation [9] is an integral of the form

$$L(p, \omega_o) = \int_{\Omega} f_r(p, \omega_i, \omega_o)\, L_i(p, \omega_i)\, (\omega_i \cdot n)\, d\omega_i, \tag{2}$$

where L(p, ω_o) is the radiance leaving the point p in direction ω_o, f_r is the reflectance function capturing the material properties at position p, L_i describes the amount of light reaching p from direction ω_i, and n corresponds to the direction of the surface normal at p. Given its practical importance, evaluating this integral efficiently has been a subject of active research for over three decades [9, 41, 8, 32]. One efficient way to evaluate the rendering equation is to approximate f_r and L_i using spherical harmonics [4, 44]. In this case, evaluating the integral boils down to a dot product between the coefficients of the two spherical harmonics approximations.

With this insight in mind, we can return to neural rendering. In the case of NeRF, F can also be interpreted as returning the radiance leaving point p in direction d. This key insight leads us to propose a new network architecture for NeRF, which we call FastNeRF. This novel architecture consists in splitting NeRF’s neural network into two networks: one that depends only on the position p and one that depends only on the ray direction d. Similarly to evaluating the rendering equation using spherical harmonics, our position-dependent and direction-dependent functions produce outputs that are combined using the dot product to obtain the color value c at position p observed from direction d. Crucially, this factorization splits a single function that takes inputs in ℝ⁵ into two functions that take inputs in ℝ³ and ℝ², respectively. As explained in the following section, this makes caching the outputs of the networks possible and allows accelerating NeRF by 3 orders of magnitude on consumer hardware. See Figure 2 for a visual representation of the achieved speedup.

The position-dependent and direction-dependent functions of FastNeRF are defined as follows:

$$\mathcal{F}_{\mathrm{pos}}: p \rightarrow (\sigma, u, v, w), \qquad u, v, w \in \mathbb{R}^{D}, \tag{3}$$
$$\mathcal{F}_{\mathrm{dir}}: d \rightarrow \beta \in \mathbb{R}^{D}, \tag{4}$$

where u, v, w are D-dimensional vectors that form a deep radiance map describing the view-dependent radiance at position p. The output of F_dir, β, is a D-dimensional vector of weights for the components of the deep radiance map. The inner product of the weights and the deep radiance map

$$c = (r, g, b) = \sum_{i=1}^{D} \beta_i \,(u_i, v_i, w_i) = (\beta \cdot u,\; \beta \cdot v,\; \beta \cdot w) \tag{5}$$

results in the estimated color c at position p observed from direction d.

When the two functions are combined using Equation (5), the resulting function has the same signature as the function F used in NeRF. This effectively allows the FastNeRF architecture to be used as a drop-in replacement for NeRF’s architecture at runtime.
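To illustrate the factorization, the sketch below combines the outputs of two stand-in functions via Equation (5). Here f_pos and f_dir are hypothetical placeholders for the trained MLPs (or their caches), not the actual networks:

```python
import numpy as np

D = 8                      # number of deep radiance map components
rng = np.random.default_rng(0)

def f_pos(p):
    """Stand-in for the position-dependent network: returns density sigma and
    a deep radiance map of D components, each an RGB triplet (u_i, v_i, w_i)."""
    sigma = rng.random()                   # placeholder for an MLP evaluation at p
    return sigma, rng.random((D, 3))

def f_dir(d):
    """Stand-in for the direction-dependent network: returns D weights beta_i."""
    return rng.random(D)                   # placeholder for an MLP evaluation at d

def fastnerf(p, d):
    """Same (p, d) -> (c, sigma) signature as NeRF's network F."""
    sigma, uvw = f_pos(p)                  # depends only on position
    beta = f_dir(d)                        # depends only on ray direction
    color = beta @ uvw                     # Equation (5): inner product over components
    return color, sigma

c, sigma = fastnerf(np.array([0.1, 0.2, 0.3]), np.array([0.0, 1.0]))
print(c, sigma)
```

Because the position-dependent and direction-dependent parts never see each other's inputs, each can be tabulated independently, which is what makes the caching in the next section tractable.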

3.3 Caching

Given the large number of samples that need to be evaluated to render a single pixel, the cost of computing F dominates the total cost of NeRF rendering. Thus, to accelerate NeRF, one could attempt to reduce the test-time cost of F by caching its outputs for a set of inputs that cover the space of the scene. The cache can then be evaluated in a fraction of the time it takes to compute F.

For a trained NeRF model, we can define a bounding box V that covers the entire scene captured by NeRF. We can then uniformly sample k values for each of the 3 world-space coordinates (x, y, z) within the bounds of V. Similarly, we uniformly sample l values for each of the ray direction coordinates θ and φ over their respective ranges. The cache is then generated by computing F(p, d) for each combination of the sampled positions p and directions d.

The size of such a cache for a standard NeRF model with k = l = 1024 and densely stored 16-bit floating point values is approximately 5600 terabytes. Even for highly sparse volumes, where one would only need to keep around 1% of the outputs, the size of the cache would still severely exceed the memory capacity of consumer-grade hardware. This huge memory requirement is caused by the use of both p and d as inputs to F: a separate output needs to be saved for each combination of p and d, resulting in a memory complexity on the order of O(k³l²).

Our FastNeRF architecture makes caching feasible. For k = l = 1024 and D = 8, the size of the two dense caches holding the outputs of F_pos and F_dir would be approximately 54 GB. For moderately sparse volumes, where 30% of space is occupied, the memory requirement is low enough to fit into either the CPU or GPU memory of consumer-grade machines. In practice, the choice of k and l depends on the scene size and the expected image resolution; for many scenarios a smaller cache is sufficient, lowering the memory requirements further. Please see the supplementary materials for the formulas used to calculate cache sizes for both network architectures.
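The exact cache-size formulas are given in the paper's supplementary material; as a rough stand-in, the sketch below estimates dense cache sizes assuming 16-bit values and the per-cell layout described above (D RGB components plus a density per position cell, D weights per direction cell):

```python
def fastnerf_cache_bytes(k, l, D, bytes_per_value=2):
    """Estimate dense cache sizes for the factorized representation.

    k: bins per world-space coordinate (position cache has k^3 cells)
    l: bins per ray-direction coordinate (direction cache has l^2 cells)
    D: number of deep radiance map components
    """
    pos_cache = k**3 * (3 * D + 1) * bytes_per_value   # (u_i, v_i, w_i) for i=1..D, plus sigma
    dir_cache = l**2 * D * bytes_per_value             # weights beta_1..beta_D
    return pos_cache, dir_cache

pos, dir_ = fastnerf_cache_bytes(k=1024, l=1024, D=8)
print(f"position cache: {pos / 1e9:.1f} GB, direction cache: {dir_ / 1e9:.3f} GB")
# Sparse storage (e.g. ~30% occupied voxels) shrinks the position cache proportionally.
print(f"position cache at 30% occupancy: {0.3 * pos / 1e9:.1f} GB")
```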

4 Implementation

Figure 2: Speed evaluation of our method and prior work [24, 34, 16, 27] on the Lego scene from the Realistic 360 Synthetic [24] dataset, rendered at 800×800 pixels. For previous methods, when numbers for the Lego scene were not available, we used an optimistic approximation.
Figure 3: Qualitative comparison of our method vs NeRF on the dataset of [24] at 800×800 pixels using 8 components. Small cache and large cache refer to our method cached at a lower and at a higher grid resolution, respectively. Varying the cache size allows for trading compute and memory for image quality, resembling levels of detail (LOD) in traditional computer graphics.
Figure 4: Qualitative comparison of our method vs NeRF on the dataset of [25] at 1008×756 pixels using 6 factors. Small cache and large cache again refer to lower and higher grid resolutions, respectively.

Aside from using the proposed FastNeRF architecture described in Section 3.2, training FastNeRF is identical to training NeRF [24]. We model F_pos and F_dir using 8-layer and 4-layer MLPs respectively, with positional encoding applied to the inputs [24]. The 8-layer MLP uses more hidden units per layer than NeRF's 256, with a separate width for the view-predicting 4-layer network; the only exception is the ficus scene, where we found an even wider 8-layer MLP gave better results. While this makes the network larger than the baseline, cache lookup speed is not influenced by MLP complexity. Similarly to NeRF, we parameterize the view direction as a 3-dimensional unit vector. Other training-time features and settings, including the coarse/fine networks and sample count, follow the original NeRF work [24] (please see supplementary materials).
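For reference, below is a minimal sketch of the NeRF-style Fourier positional encoding γ mentioned above. The frequency count is a parameter (the original NeRF uses 10 frequencies for positions), and this is an illustration rather than the exact implementation used here:

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """NeRF-style Fourier feature encoding gamma(x), applied elementwise.

    x: (..., C) input coordinates; returns (..., 2 * num_freqs * C) features.
    """
    freqs = 2.0 ** np.arange(num_freqs) * np.pi          # 2^0 * pi ... 2^(L-1) * pi
    scaled = x[..., None] * freqs                        # (..., C, L)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

# Example: encode a 3D position with L = 10 frequencies (NeRF's default for positions).
p = np.array([0.1, 0.2, 0.3])
print(positional_encoding(p, num_freqs=10).shape)        # (60,)
```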

At test time, both our method and NeRF take a set of camera parameters as input, which are used to generate a ray for each pixel in the output image. A number of samples are then produced along each ray and integrated following Section 3.1. While FastNeRF can be executed using its neural network representation, its performance is greatly improved when cached. To further improve performance, we use hardware-accelerated ray tracing [31] to skip empty space: we start integrating points along a ray only after its first hit with a collision mesh derived from the density volume, and visit every voxel thereafter until the ray's transmittance is saturated. The collision mesh can be computed from a signed distance function derived from the density volume using Marching Cubes [20]. For larger volumes, we first downsample the volume by a factor of two to reduce mesh complexity. We use the same meshing and integration parameters for all scenes and datasets. Value lookup uses nearest-neighbour interpolation for the position-dependent cache and trilinear sampling for the direction-dependent cache, the latter of which can be processed with a Gaussian or other kernel to ‘smooth’ or edit directional effects. We note that the meshes and volumes derived from the cache can also be used to provide approximations to shadows and global illumination in a path-tracing framework.
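As a rough illustration of the cached rendering loop, the sketch below marches a ray through the cached volume, skips empty voxels, and terminates once transmittance is saturated. It is a simplified stand-in, not the paper's CUDA implementation: the occupancy test, fixed step size, nearest-neighbour lookups, and a single per-ray weight vector replace the collision mesh and interpolation schemes described above.

```python
import numpy as np

def render_ray_cached(origin, direction, pos_cache, sigma_cache, beta,
                      t_near, t_far, step, min_transmittance=1e-3):
    """March one ray through cached FastNeRF grids (simplified sketch).

    pos_cache:   (k, k, k, D, 3) deep radiance map components per voxel
    sigma_cache: (k, k, k) densities per voxel
    beta:        (D,) direction-dependent weights, looked up once per ray
    """
    k = sigma_cache.shape[0]
    color = np.zeros(3)
    transmittance = 1.0
    t = t_near
    while t < t_far and transmittance > min_transmittance:   # early termination
        p = origin + t * direction
        idx = tuple(np.clip((p * k).astype(int), 0, k - 1))   # nearest-neighbour voxel lookup
        sigma = sigma_cache[idx]
        if sigma > 0.0:                                       # skip empty space
            alpha = 1.0 - np.exp(-sigma * step)
            color += transmittance * alpha * (beta @ pos_cache[idx])  # Eq. (5) per sample
            transmittance *= 1.0 - alpha
        t += step
    return color

# Toy usage on random caches with k = 64, D = 8 and a unit-cube scene.
k, D = 64, 8
rng = np.random.default_rng(0)
pos_cache = rng.random((k, k, k, D, 3))
sigma_cache = (rng.random((k, k, k)) > 0.7) * rng.random((k, k, k)) * 10.0
beta = rng.random(D)
c = render_ray_cached(np.array([0.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0]),
                      pos_cache, sigma_cache, beta, t_near=0.0, t_far=1.0, step=1.0 / k)
print(c)
```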

5 Experiments

[Table 1 body omitted: PSNR, SSIM and LPIPS for NeRF, Ours (no cache) and Ours (cache), plus average render speed in ms, for the NeRF Synthetic and LLFF datasets.]
Table 1: We compare NeRF to our method, both uncached and cached at a high grid resolution, in terms of PSNR, SSIM and LPIPS, and also provide the average speed of our cached method in ms. LLFF denotes the dataset of [25] at 1008×756 pixels; the other dataset is that of [24] at 800×800 pixels. We use 8 components for the NeRF Synthetic 360 scenes and 6 components for LLFF, each cached at a high resolution. Please see the supplementary material for more detailed results.
[Table 2 body omitted: per-scene render times in ms for NeRF and for our method without and with caching at several resolutions (Chair, Lego, Horns*, Leaves*), together with the speedup over NeRF.]
Table 2: Speed comparison of our method vs NeRF in ms. The Chair and Lego scenes are rendered at 800×800 resolution, the Horns and Leaves scenes at 1008×756. Our method never drops below 100FPS when cached, and is often significantly faster. Please note that our method is slower when not cached due to using larger MLPs; it performs identically to NeRF when using 256 hidden units. We do not compute the highest-resolution cache for the LLFF scenes because these are less sparse due to the use of normalized device coordinates.
[Table 3 body omitted: PSNR (dB) and cache memory (GB) for 4, 6, 8 and 16 components, uncached and at several grid resolutions.]
Table 3: Influence of the number of components and the grid resolution on PSNR and on the memory required for caching the Ship scene. Note how more factors increase grid sparsity. We find that 6 or 8 components are a reasonable compromise in practice.

We evaluate our method quantitatively and qualitatively on the Realistic 360 Synthetic [24] and Local Light Field Fusion (LLFF) [25] (with additions from [24]) datasets used in the original NeRF paper. While the NeRF synthetic dataset consists of 360-degree views of complex objects, the LLFF dataset consists of forward-facing scenes with fewer images. In all comparisons with NeRF we use the same training parameters as described in the original paper [24].

To assess the rendering quality of FastNeRF we compare its outputs to ground truth using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) [42], and the perceptual LPIPS metric [47]. We use the same metrics in our ablation study, where we evaluate the effects of changing the number of components and the size of the cache. All speed comparisons are run on a machine equipped with an Nvidia RTX 3090 GPU.

We keep the size of the view-dependent cache the same for all experiments. It is the base resolution of the position-dependent RGBD cache that represents the main trade-off between runtime and memory on the one hand and image quality on the other. Across all results, grid resolution refers to the number of divisions of the longest side of the bounding volume surrounding the object of interest.

Rendering quality: Because we adopt the inputs, outputs, and training of NeRF, we retain compatibility with ray tracing [43, 9] and the ability to specify only loose volume bounds. At the same time, our factorization and caching do not affect the character of the rendered images to a large degree. Please see Figure 3 and Figure 4 for qualitative comparisons, and Table 1 for a quantitative evaluation. Note that some aliasing artefacts can appear since our model is cached to a grid after training; this explains the decrease across all metrics as the grid resolution is lowered, as seen in Table 1. However, both NeRF and FastNeRF mitigate the multi-view inconsistencies, ghosting and other such artifacts common in multi-view reconstruction. Table 1 demonstrates that at high enough cache resolutions, our method is capable of the same visual quality as NeRF. Figure 3 and Figure 4 further show that smaller caches appear ‘pixelated’ but retain the overall visual characteristics of the scene. This is similar to how different levels of detail are used in computer graphics [21], and an important innovation on the path towards neurally rendered worlds.

Cache Resolution: As shown in Table 1, our method matches or outperforms NeRF on the dataset of synthetic objects at a high cache resolution. Because our method is fast enough to visit every voxel (as opposed to the fixed sample count used in NeRF), it can sometimes achieve better scores by not missing any detail. We observe that an intermediate cache resolution is a good trade-off between perceptual quality, memory and rendering speed for the synthetic dataset, and likewise for the LLFF dataset. For the Ship scene, an ablation over grid size, memory, and PSNR is shown in Table 3.

For the view-filling LLFF scenes at 1008×756 pixels, our method sees a slight decrease in metrics when cached, but still produces qualitatively compelling results, as shown in Figure 4, where intricate detail is clearly preserved.

Rendering Speed: When cached at a high grid resolution, FastNeRF is on average more than 3000× faster than NeRF at largely the same perceptual quality. See Table 2 for a breakdown of run-times across several cache resolutions and Figure 2 for a speed comparison with other NeRF extensions. Note that we log the time it takes our method to fill an RGBA buffer of the image size with the final pixel values, whereas the baseline implementation needs to perform various additional steps to reshape samples back into images. Our CUDA kernels are not highly optimized, favoring flexibility over maximum performance; it is reasonable to assume that advanced code optimization and further compression of the grid values could reduce compute time further.

Number of Components: While increasing the number of components yields a small theoretical improvement in PSNR, we find that 6 or 8 components are sufficient for most scenes. As shown in Table 3, our cache can bring out fine details that are lost when only a fixed sample count is used, compensating for the difference. Table 3 also shows that caches with more components tend to be increasingly sparse, which partly compensates for the additional memory usage. Please see the supplementary material for more detailed results on other scenes.

6 Application

Figure 5: Face images rendered using FastNeRF combined with a deformation field network [29]. Thanks to the use of FastNeRF, expression-conditioned images can be rendered at 30FPS.

We demonstrate a proof-of-concept application of FastNeRF for a telepresence scenario by first gathering a dataset of a person performing facial expressions for about 20s in a multi-camera rig consisting of 32 calibrated cameras. We then fit a 3D face model similar to FLAME [13] to obtain expression parameters for each frame. Since the captured scene is not static, we train a FastNeRF model of the scene jointly with a deformation model [29] conditioned on the expression data. The deformation model takes the samples as input and outputs their updated positions that live in a canonical frame of reference modeled by FastNeRF.

We show example outputs produced using this approach in Figure 5. This method allows us to render images of the face at 30 fps on a single Nvidia Tesla V100 GPU – around 50 times faster than the same setup with a NeRF model. At the resolution we achieve, the facial expression is clearly visible, and the high frame rate allows for real-time rendering, opening the door to telepresence scenarios. The main limitation in terms of speed and image resolution is the deformation model, which needs to be executed for a large number of samples. We employ a simple pruning method, detailed in the supplementary material, to reduce the number of samples.

Finally, note that while we train this proof-of-concept method on a dataset captured using multiple cameras, the approach can be extended to use only a single camera, similarly to [7].

Since FastNeRF uses the same set of inputs and outputs as those used in NeRF, it is applicable to many of NeRF’s extensions. This allows for accelerating existing approaches for: reconstruction of dynamic scenes [29, 33, 14], single-image reconstruction [38], quality improvement [34, 46] and others [27]. With minor modifications, FastNeRF can be applied to even more methods allowing for control over illumination [2] and incorporation of uncertainty [22].

7 Conclusion

In this paper we presented FastNeRF, a novel extension to NeRF that enables the rendering of photorealistic images at 200Hz and more on consumer hardware. We achieve significant speedups over NeRF and competing methods by factorizing its function approximator, enabling a caching step that makes rendering memory-bound instead of compute-bound. This allows for the application of NeRF to real-time scenarios.

References

  • [1] M. Bemana, K. Myszkowski, H. Seidel, and T. Ritschel (2020) X-fields: implicit neural view-, light- and time-image interpolation. ACM Transactions on Graphics (Proc. SIGGRAPH Asia 2020) 39 (6). External Links: Document Cited by: §1.
  • [2] S. Bi, Z. Xu, P. Srinivasan, B. Mildenhall, K. Sunkavalli, M. Hašan, Y. Hold-Geoffroy, D. Kriegman, and R. Ramamoorthi (2020) Neural reflectance fields for appearance acquisition. External Links: 2008.03824 Cited by: §1, §6.
  • [3] S. Bi, Z. Xu, K. Sunkavalli, M. Hašan, Y. Hold-Geoffroy, D. Kriegman, and R. Ramamoorthi (2020) Deep reflectance volumes: relightable reconstructions from multi-view photometric images. External Links: 2007.09892 Cited by: §1, §2, §2.
  • [4] B. Cabral, N. Max, and R. Springmeyer (1987-08) Bidirectional reflection functions from surface bump maps. 21 (4), pp. 273–281. External Links: ISSN 0097-8930, Link, Document Cited by: §3.2.
  • [5] Z. Chen and H. Zhang (2018) Learning implicit fields for generative shape modeling. CoRR abs/1812.02822. External Links: Link, 1812.02822 Cited by: §1.
  • [6] JaxNeRF: an efficient JAX implementation of NeRF External Links: Link Cited by: §2.
  • [7] G. Gafni, J. Thies, M. Zollhöfer, and M. Nießner (2020) Dynamic neural radiance fields for monocular 4d facial avatar reconstruction. arXiv preprint arXiv:2012.03065. Cited by: §1, §6.
  • [8] I. Georgiev, J. Křivánek, T. Davidovič, and P. Slusallek (2012-11) Light transport simulation with vertex connection and merging. ACM Trans. Graph. 31 (6), pp. 192:1–192:10. External Links: ISSN 0730-0301, Link, Document Cited by: §3.2.
  • [9] J. T. Kajiya (1986-08) The rendering equation. SIGGRAPH Comput. Graph. 20 (4), pp. 143–150. External Links: ISSN 0097-8930, Link, Document Cited by: §3.2, §5.
  • [10] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. External Links: 1812.04948 Cited by: §1.
  • [11] M. Kolos, A. Sevastopolsky, and V. Lempitsky (2020) TRANSPR: transparency ray-accumulating neural 3d scene point renderer. External Links: 2009.02819 Cited by: §2.
  • [12] C. Lassner and M. Zollhöfer (2020) Pulsar: efficient sphere-based neural rendering. External Links: 2004.07484 Cited by: §2.
  • [13] T. Li, T. Bolkart, M. J. Black, H. Li, and J. Romero (2017) Learning a model of facial shape and expression from 4d scans.. ACM Trans. Graph. 36 (6), pp. 194–1. Cited by: §6.
  • [14] Z. Li, S. Niklaus, N. Snavely, and O. Wang (2020) Neural scene flow fields for space-time view synthesis of dynamic scenes. arXiv preprint arXiv:2011.13084. Cited by: §6.
  • [15] D. B. Lindell, J. N. Martel, and G. Wetzstein (2020) AutoInt: automatic integration for fast neural volume rendering. arXiv preprint arXiv:2012.01714. Cited by: §1, §2.
  • [16] L. Liu, J. Gu, K. Z. Lin, T. Chua, and C. Theobalt (2021) Neural sparse voxel fields. External Links: 2007.11571 Cited by: §1, §2, Figure 2.
  • [17] S. Lombardi, J. M. Saragih, T. Simon, and Y. Sheikh (2018) Deep appearance models for face rendering. CoRR abs/1808.00362. External Links: Link, 1808.00362 Cited by: §1.
  • [18] S. Lombardi, T. Simon, J. Saragih, G. Schwartz, A. Lehrmann, and Y. Sheikh (2019-07) Neural volumes: learning dynamic renderable volumes from images. ACM Trans. Graph. 38 (4), pp. 65:1–65:14. Cited by: §1, §2, §2.
  • [19] S. Lombardi, T. Simon, G. Schwartz, M. Zollhoefer, Y. Sheikh, and J. Saragih (2021) Mixture of volumetric primitives for efficient neural rendering. External Links: 2103.01954 Cited by: §1, §2.
  • [20] W. E. Lorensen and H. E. Cline (1987-08) Marching cubes: a high resolution 3d surface construction algorithm. SIGGRAPH Comput. Graph. 21 (4), pp. 163–169. External Links: ISSN 0097-8930, Link, Document Cited by: §4.
  • [21] D. Luebke, M. Reddy, J. D. Cohen, A. Varshney, B. Watson, and R. Huebner (2002) Level of detail for 3d graphics. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. External Links: ISBN 9780080510118 Cited by: §5.
  • [22] R. Martin-Brualla, N. Radwan, M. S. M. Sajjadi, J. T. Barron, A. Dosovitskiy, and D. Duckworth (2021) NeRF in the wild: neural radiance fields for unconstrained photo collections. External Links: 2008.02268 Cited by: §1, §6.
  • [23] L. M. Mescheder, M. Oechsle, M. Niemeyer, S. Nowozin, and A. Geiger (2018) Occupancy networks: learning 3d reconstruction in function space. CoRR abs/1812.03828. External Links: Link, 1812.03828 Cited by: §1.
  • [24] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng (2020) Nerf: representing scenes as neural radiance fields for view synthesis. arXiv preprint arXiv:2003.08934. Cited by: §1, §1, §2, §2, §2, §3.1, §3.1, §3.1, §3, Figure 2, Figure 3, §4, Table 1, §5.
  • [25] B. Mildenhall, P. P. Srinivasan, R. Ortiz-Cayon, N. K. Kalantari, R. Ramamoorthi, R. Ng, and A. Kar (2019-07) Local light field fusion: practical view synthesis with prescriptive sampling guidelines. ACM Trans. Graph. 38 (4). External Links: ISSN 0730-0301, Link, Document Cited by: §1, Figure 4, Table 1, §5.
  • [26] O. Nalbach, E. Arabadzhiyska, D. Mehta, H. Seidel, and T. Ritschel (2017) Deep shading: convolutional neural networks for screen space shading. In Computer Graphics Forum, Vol. 36, pp. 65–78. Cited by: §1.
  • [27] T. Neff, P. Stadlbauer, M. Parger, A. Kurz, C. R. A. Chaitanya, A. Kaplanyan, and M. Steinberger (2021) DONeRF: towards real-time rendering of neural radiance fields using depth oracle networks. External Links: 2103.03231 Cited by: §1, §2, Figure 2, §6.
  • [28] J. J. Park, P. Florence, J. Straub, R. A. Newcombe, and S. Lovegrove (2019) DeepSDF: learning continuous signed distance functions for shape representation. CoRR abs/1901.05103. External Links: Link, 1901.05103 Cited by: §1.
  • [29] K. Park, U. Sinha, J. T. Barron, S. Bouaziz, D. B. Goldman, S. M. Seitz, and R. Brualla (2020) Deformable neural radiance fields. arXiv preprint arXiv:2011.12948. Cited by: Figure 5, §6, §6.
  • [30] K. Park, U. Sinha, J. T. Barron, S. Bouaziz, D. B. Goldman, S. M. Seitz, and R. Martin-Brualla (2020) Deformable neural radiance fields. arXiv preprint arXiv:2011.12948. Cited by: §1.
  • [31] S. G. Parker, J. Bigler, A. Dietrich, H. Friedrich, J. Hoberock, D. Luebke, D. McAllister, M. McGuire, K. Morley, A. Robison, et al. (2010) Optix: a general purpose ray tracing engine. Acm transactions on graphics (tog) 29 (4), pp. 1–13. Cited by: §4.
  • [32] M. Pharr, W. Jakob, and G. Humphreys (2016) Physically based rendering: from theory to implementation. Elsevier Science. External Links: ISBN 9780128007099, LCCN 2017288731, Link Cited by: §3.2.
  • [33] A. Pumarola, E. Corona, G. Pons-Moll, and F. Moreno-Noguer (2020) D-nerf: neural radiance fields for dynamic scenes. arXiv preprint arXiv:2011.13961. Cited by: §6.
  • [34] D. Rebain, W. Jiang, S. Yazdani, K. Li, K. M. Yi, and A. Tagliasacchi (2020) DeRF: decomposed radiance fields. arXiv preprint arXiv:2011.12490. Cited by: §1, §2, Figure 2, §6.
  • [35] G. Riegler and V. Koltun (2020) Stable view synthesis. External Links: 2011.07233 Cited by: §1.
  • [36] P. P. Srinivasan, B. Deng, X. Zhang, M. Tancik, B. Mildenhall, and J. T. Barron (2020) NeRV: neural reflectance and visibility fields for relighting and view synthesis. External Links: 2012.03927 Cited by: §1.
  • [37] T. Takikawa, J. Litalien, K. Yin, K. Kreis, C. Loop, D. Nowrouzezahrai, A. Jacobson, M. McGuire, and S. Fidler (2021) Neural geometric level of detail: real-time rendering with implicit 3d shapes. External Links: 2101.10994 Cited by: §2.
  • [38] M. Tancik, B. Mildenhall, T. Wang, D. Schmidt, P. P. Srinivasan, J. T. Barron, and R. Ng (2020) Learned initializations for optimizing coordinate-based neural representations. External Links: 2012.02189 Cited by: §2, §6.
  • [39] M. Tancik, P. P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. T. Barron, and R. Ng (2020) Fourier features let networks learn high frequency functions in low dimensional domains. External Links: 2006.10739 Cited by: §1, §2.
  • [40] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), Vol. 30, pp. . External Links: Link Cited by: §2.
  • [41] E. Veach and L. J. Guibas (1995) Optimally combining sampling techniques for monte carlo rendering. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’95, New York, NY, USA, pp. 419–428. External Links: ISBN 0897917014, Link, Document Cited by: §3.2.
  • [42] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing 13 (4), pp. 600–612. Cited by: §5.
  • [43] T. Whitted (2005) An improved illumination model for shaded display. In ACM Siggraph 2005 Courses, pp. 4–es. Cited by: §5.
  • [44] L. Wu, G. Cai, S. Zhao, and R. Ramamoorthi (2020-07) Analytic spherical harmonic gradients for real-time rendering with many polygonal area lights. ACM Trans. Graph. 39 (4). External Links: ISSN 0730-0301, Link, Document Cited by: §3.2.
  • [45] W. Xian, J. Huang, J. Kopf, and C. Kim (2020) Space-time neural irradiance fields for free-viewpoint video. arXiv preprint arXiv:2011.12950. Cited by: §1.
  • [46] K. Zhang, G. Riegler, N. Snavely, and V. Koltun (2020) Nerf++: analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492. Cited by: §6.
  • [47] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang (2018) The unreasonable effectiveness of deep features as a perceptual metric. CoRR abs/1801.03924. External Links: Link, 1801.03924 Cited by: §5.
  • [48] E. D. Zhong, T. Bepler, J. H. Davis, and B. Berger (2020) Reconstructing continuous distributions of 3d protein structure from cryo-em images. External Links: 1909.05215 Cited by: §2.