1. Introduction
Real-time rendering of photorealistic scenes with complex effects is a demanding problem in interactive applications, such as games. At the same time, consumer machine learning accelerators are widespread, from desktop GPUs to mobile VR headsets and mobile phones, making the evaluation of neural networks fast and power-efficient. Recent advances in implicit neural scene representations
(Sitzmann et al., 2019b, 2020), especially in the direction of NeRF (Mildenhall et al., 2020), have impressively shown that machine learning can also be used for encoding and rendering high-quality scenes and lighting effects. NeRF combines an MLP network with traditional volume rendering methods to faithfully render high-quality content. In this way, neural representations can reproduce high-resolution light fields containing detailed objects and complex illumination with practical storage requirements. In this work, we focus on making neural representations more practical for interactive and real-time rendering applications. In particular, our ultimate goal is to enable synthetic content with movie-quality effects in interactive rendering. For example, such a representation can serve as a compact and efficient backdrop for static, distant parts of a game scene, or in VR and AR scenarios, where a simple environment map does not offer the parallax required to create a high-quality stereo stimulus.
While NeRF-like methods show significant potential for compact high-quality object and scene representations, they are too expensive to evaluate in real time. The original NeRF method (Mildenhall et al., 2020) uses two MLP networks and requires 192 network evaluations (64 coarse + 128 fine) for a single pixel. Real-time rendering of such a representation onto a VR headset, at full per-eye resolution and refresh rate, would require compute on the order of petaFLOPS. Clearly, this is not possible on current GPU hardware, and evaluation cost is a major limiting factor for neural representations to be used for real-time image generation.
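As a rough illustration, assume a headset with 1440×1600 pixels per eye refreshed at 90 Hz and roughly 0.5 MFLOP per MLP evaluation (illustrative values only; the exact figures depend on the device and the network):

$$
2 \cdot 1440 \cdot 1600\ \text{px} \;\times\; 90\,\text{Hz} \;\times\; 192\,\tfrac{\text{evaluations}}{\text{px}} \;\times\; 0.5\,\tfrac{\text{MFLOP}}{\text{evaluation}} \;\approx\; 4\times 10^{16}\,\tfrac{\text{FLOP}}{\text{s}} \;=\; 40\ \text{petaFLOPS}.
$$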
In this paper, we explore how a NeRF-like representation can be designed for efficient future real-time rendering. We make the following contributions:
-
We propose a dual network design to reduce the evaluation cost of neural rendering for synthetic content. The first network, a depth oracle, estimates depth values along each ray. The second network, a shading network, places a low number of samples guided by the output of the first. This setup achieves quality equal to the original NeRF method with only 2–8 network evaluations per pixel, reducing the computational cost by a factor of roughly 28–78 (see Table 2).
-
We present a robust depth oracle network that utilizes the reference depth readily available for synthetic content. It discretizes depth values along the ray and learns to solve a classification task rather than directly estimating depth values. In this way, our depth oracle efficiently distributes shading samples, especially around depth discontinuities. We introduce spatial filtering of the target data and unification of the input rays to improve the efficiency and robustness of depth estimates in complex scenes.
-
We introduce a non-linear transformation of depth and sampling to efficiently handle large, open scenes. Moreover, we show that sampling of the shading network should happen in a warped space to better capture high frequencies in the foreground and reduce noise in the background.
-
Combining our efforts, we demonstrate high-quality interactive neural rendering of large synthetic scenes.
Our work addresses an important practical problem towards adopting neural representations for real-time rendering. Besides improving the rendering efficiency, our design is also significantly faster to train, as fewer samples are needed per ray. Note that we focus on static synthetic scenes and consider dynamic scenes and animations orthogonal to our work.

2. Related work
Image-based novel view synthesis
Under the umbrella of IBR (Shum and Kang, 2000; Zhang and Chen, 2004), steady progress has been made over the years to further improve the realism of novel views generated from only a few example images. Most recently, techniques using MPIs (Zhou et al., 2018; Flynn et al., 2019) have achieved impressive results by blending together image layers that are spaced along the camera viewing direction, using a neural network to generate the MPI light field representation. In this way, the generation of novel views requires only a simple alpha blending step over all layers. While this is efficient, the main drawback of MPIs is that they typically only allow for small viewing regions (Srinivasan et al., 2019), in addition to requiring densely sampled input images to assemble a high-quality MPI. By introducing systematic input image locations, blending between multiple generated MPIs (Mildenhall et al., 2019), and using spherical image layers (Broxton et al., 2020)
, the potential field of view can be increased. Alternatively, given enough images captured across different view, time, or illumination conditions, it is possible to train an implicit mapping from these domains to the input images, allowing for fast and efficient interpolation
(Bemana et al., 2020).
Implicit neural scene representations
While explicit neural representations based on voxels (Sitzmann et al., 2019a), MPIs (Zhou et al., 2018; Flynn et al., 2019) or proxy geometry (Hedman et al., 2018) enable fast novel view generation, they are fundamentally limited by the internal resolution of their representation, which can result in blurred output for high-frequency content. Furthermore, the required memory for such representations scales poorly with spatial resolution and is therefore not applicable to large-scale scenes. To circumvent this issue, recent work has explored the potential of implicit neural scene representations (Park et al., 2019; Sitzmann et al., 2019b) to directly infer outputs from a continuous input space, such as ray origins and directions. SRNs (Sitzmann et al., 2019b) directly map 3D world coordinates to a feature representation of the scene, and use a learned raymarcher to accumulate rays for the final RGB output. Similarly, Neural Volumes (Lombardi et al., 2019)
use raymarching to accumulate rays in a learned, warped, volumetric representation, making it possible to reconstruct high-quality images from a multiview capture. By using periodic activation functions
(Sitzmann et al., 2020), it is even possible to accurately learn gradients and Laplacians, as well as to further improve the quality of the output compared to widely used ReLU activations.
Neural Radiance Fields
Opening up a whole new subdomain of research, Mildenhall et al. (2020) introduced NeRF, which replaced the learned raymarching step of SRNs with a fixed, differentiable raymarching step using a constant number of samples per ray. Compared to Neural Volumes, this raymarching step does not traverse a fixed-size representation; instead, all ray samples are fed into an MLP, followed by an accumulation step to generate the RGB output. Even though this is a simpler pipeline overall, they achieve state-of-the-art results for novel view generation and showcase additional features such as the ability to reconstruct depth maps. A key ingredient that enables NeRF to reconstruct high-frequency details is positional encoding
, where each ray sample is transformed into a periodic, higher dimensional space using a sine-cosine basis. While the number of frequencies used for the periodic basis is an additional hyperparameter that needs to be tuned per scene,
Tancik et al. (2020) have since shown that positional encoding can be generalized to randomized Fourier kernels. The combination of simplicity and impressive results inspired a large number of modifications and adaptations of the original NeRF framework, sometimes referred to as the NeRF explosion (Dellaert and Yen-Chen, 2021). Since its inception, NeRF has been modified to capture dynamic free-viewpoint video (Park et al., 2020; Li et al., 2020; Xian et al., 2020; Pumarola et al., 2020; Du et al., 2020), generate photorealistic human avatars (Gafni et al., 2020; Gao et al., 2020), perform relighting on captured scenes (Martin-Brualla et al., 2020; Bi et al., 2020; Srinivasan et al., 2020; Boss et al., 2020), conditionally encode shape and appearance via latent codes (Schwarz et al., 2020; Chan et al., 2020; Yu et al., 2020; Trevithick and Yang, 2020), and compose scenes of multiple objects (Ost et al., 2020; Yuan et al., 2021; Niemeyer and Geiger, 2020). The sheer number of recent publications underlines the importance and potential of NeRF-like representations.
While all of these NeRF variants show impressive qualitative results, the large number of required samples leads to long training times and makes NeRF unsuitable for real-time applications. As a result, several recent publications incrementally improve the training scheme of NeRF, further increasing quality and run-time efficiency. DeRF (Rebain et al., 2020) decomposes the scene with a learnable Voronoi decomposition and trains a separate NeRF for each cell, thereby avoiding the diminishing returns of larger network sizes and increasing run-time efficiency due to better cache locality. Combining the explicit voxel-based representations of previous work with NeRF-style raymarching, NSVF (Liu et al., 2020) introduced a self-pruning sparse voxel octree structure, where each ray sample includes additional information from a tri-linearly interpolated embedding of voxel vertices. This not only improves quality, but can also reduce the number of samples per ray, as only non-empty voxels are considered for raymarching. However, for scenes with high depth complexity, the ratio of foreground to background can be significant, resulting in longer evaluation times if multiple voxels are intersected by a single ray. In comparison to our work, NSVF still requires more samples, needs an additional data structure, and may lead to variable evaluation times depending on the current view, which can be a disadvantage for real-time applications.
Recently, Lindell et al. (2020) showed that partial integrals along a ray can also be learned, which reduces the number of evaluations needed when raymarching through a NeRF. However, the achieved image quality drops significantly for only moderate performance gains.
In our work, we show the most significant performance gains compared to previous work, while also increasing image quality compared to NeRF (Mildenhall et al., 2020). We do so in the context of complete neural representations, i.e., by only using two neural networks. We show that our approach is also usable for large-scale scenes, where the original NeRF struggles and where techniques that require additional data structures, such as NSVF (Liu et al., 2020), are difficult to establish because scene bounding boxes become overly large.
3. Efficient Neural Rendering using Depth Oracle Networks
In order to achieve real-time, movie-quality rendering of compact neural representations for generated content, we adapt the MLP-based neural raymarching scheme of NeRF (Mildenhall et al., 2020). We introduce DONeRF, a compact local sampling strategy that enables raymarching-based neural representations to only consider important samples around surface regions.
DONeRF consists of two networks: a depth oracle MLP that predicts optimal sample locations along the view ray (Sampling Oracle Network, Section 5), and a shading MLP that predicts RGBA output and uses NeRF-like raymarching accumulation. The combination of these two networks enables efficient skipping of empty space, while inheriting the compactness and resolution independence of MLP-based neural representations such as NeRF.
DONeRF consists of a five-stage pipeline (Figure 2): We transform the input rays to a unified space, given information about a supported view cell (Ray Unification, Section 5.2). We use a classification network as our depth oracle, which outputs filtered depth value likelihoods for each discretized ray segment (Classified Depth, Section 5.1). The classification network is trained using modified ground truth depth images, which are typically available for generated content. All depth inputs and outputs are transformed to logarithmic space (Non-linear Sampling, Section 4.1), and we place a low number of samples in the vicinity of the predicted depth (Local Sampling, Section 4.2). Finally, we transform the local samples with an inverse square root transform (Space Warping, Section 4.1) before applying positional encoding and feeding them into the shading network. By efficiently compressing the depth and radiance information into two MLPs, we arrive at a compact representation that is suitable for streaming high-quality generated content for VR or games.
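The per-ray data flow can be summarized with the following minimal Python sketch. The stage implementations are passed in as callables and all names are illustrative; it is meant as a structural picture of Figure 2 rather than the actual implementation. Concrete sketches of the individual stages follow in Sections 4 and 5.

```python
import numpy as np
from typing import Callable

def render_ray(o: np.ndarray, d: np.ndarray,
               unify: Callable, oracle: Callable, place_samples: Callable,
               warp: Callable, encode: Callable, shade: Callable) -> np.ndarray:
    """Structural sketch of the five-stage DONeRF pipeline for one ray (Figure 2).
    Every stage is injected as a callable; all names are illustrative."""
    o_u, d_u = unify(o, d)                    # 1. ray unification (Section 5.2)
    likelihoods = oracle(o_u, d_u)            # 2. one depth-oracle evaluation per ray (Section 5.1)
    depths = place_samples(likelihoods)       # 3. few local samples along the ray (Sections 4.2, 5.1)
    xyz = warp(o_u + depths[:, None] * d_u)   # 4. non-linear transform + space warping (Section 4.1)
    rgba = shade(encode(xyz), d_u)            # 5. shading network on encoded samples, shape (N, 4)

    # NeRF-style accumulation: alpha-composite the N RGBA samples front to back.
    alpha = rgba[:, 3]
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * transmittance
    return (weights[:, None] * rgba[:, :3]).sum(axis=0)
```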
For training, we use RGBD input images sampled from a view cell. A view cell is defined as an axis aligned bounding box with a center and size in all three dimensions, and defines the maximum allowed range for novel view generation. In a streaming setup, trained network weights for partially overlapping view cells can be swapped or interpolated to enable seamless transitions between larger sets of potential views.
4. Efficient Neural Sampling
In terms of inference performance, NeRF-like neural representations scale most significantly with the number of samples per ray. While there are methods that deploy multiple lower-capacity networks for a moderate quality-speed trade-off (Rebain et al., 2020)
, the target output quality is fundamentally limited by the given network capacity. Thus, we consider network capacity optimizations orthogonal to our work.
Instead, we consider different sample placement strategies to reduce the amount of samples for neural raymarching, following two scenarios:
-
Without any knowledge of the scene, can the sample count be reduced by an alternative sample placement structure without sacrificing quality?
-
Given information about surfaces in the scene, how can samples be placed selectively to increase performance while keeping high visual quality?
To answer these questions, we run six ablation studies. For all experiments, we assume static geometry and lighting, and split each scene into slightly overlapping view cells. We vary the number of samples per ray and use only a single MLP as our representation (without the refinement network proposed in the original NeRF work, which dynamically influences sample placement). We conduct our experiments on four versatile scenes: Bulldozer, Forest, Classroom, and San Miguel (Section 6.1). For Bulldozer, Forest, and Classroom, we keep the positional and directional encoding frequencies at 10 and 4, as suggested by Mildenhall et al. (2020). For San Miguel, we observed a need for more high-frequency detail and use additional positional frequencies. For more details on the evaluation setup and the evaluated test scenes, please refer to Section 6 (Evaluation). Qualitative example test images for all scenes in all ablation experiments can be found in the supplemental material.
4.1. General Sampling
We first assume that no additional information about a scene is available except for its size (and thus its near and far planes), and that samples are placed based only on the camera location.
Equidistant Sampling
Following the original NeRF framework, the default strategy is to place samples with equal distance between the near and far plane. We normalize sample positions before positional encoding, placing the view cell center at the origin and dividing by the maximum sample distance from the view cell center.
Non-linear Sampling
Since large depth ranges are unsuitable for equidistant sampling in world space, we also investigate non-linear sampling, as initially proposed by Mildenhall et al. (2020). However, instead of exploiting normalized device coordinates and sampling in inverse depth, we use a logarithmic non-linearity to place the samples, mapping the distance to the view cell center to a non-linearly transformed sample distance. As before, we normalize by the maximum distance before positional encoding.
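As an illustration, one way to realize such a logarithmic placement is to distribute samples with equal steps in log-distance rather than in world-space distance (the exact mapping used here is not reproduced; treat this as a sketch):

```python
import numpy as np

def equidistant_samples(near: float, far: float, n: int) -> np.ndarray:
    """Equal steps in world-space distance between the near and far plane."""
    return np.linspace(near, far, n)

def logarithmic_samples(near: float, far: float, n: int) -> np.ndarray:
    """Equal steps in log-distance (near > 0): denser close to the view cell,
    sparser in the distance."""
    return np.exp(np.linspace(np.log(near), np.log(far), n))

# Example: 8 samples between 0.5 and 100 world units.
print(equidistant_samples(0.5, 100.0, 8).round(2))
print(logarithmic_samples(0.5, 100.0, 8).round(2))
```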
Space Warping
Early training results for non-linear sampling show that the background typically contains high frequencies that must be dampened by the network while the foreground requires those to achieve high detail. This is not surprising, as real cameras and graphics techniques such as mip-mapping and anti-aliasing average background details as well.
To remedy this issue, we propose a warping of 3D space towards a view cell-centric space. Starting from the center of the supported view cell, we warp the entire space using a radial distortion, bringing the background closer to the view cell.
We transform every ray sample, i.e., we place samples in a view cell-centric space and transform them before positional encoding. While this transformation warps ray samples such that they do not follow a straight line, sample locations in space stay consistent, i.e., samples from different view points landing at the same 3D location are evaluated equally.
Initial experiments showed that using an inverse square root transform achieves the best results.
Note that this transform already leads to the desired normalization before feature encoding. See Figure 3 for a visualization of all three placement strategies.
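A plausible instantiation of this radial inverse square root warp divides each sample's offset from the view cell center by the square root of its distance (and by the square root of the maximum distance for normalization); the constants and exact normalization may differ from the original formulation:

```python
import numpy as np

def inverse_sqrt_warp(x: np.ndarray, view_cell_center: np.ndarray, d_max: float) -> np.ndarray:
    """Radially warp sample positions of shape (..., 3) toward the view cell center:
    the offset from the center is divided by the square root of its length (and by
    sqrt(d_max) for normalization), so the warped radius is sqrt(r / d_max) <= 1."""
    v = x - view_cell_center
    r = np.linalg.norm(v, axis=-1, keepdims=True)
    r = np.maximum(r, 1e-8)              # avoid division by zero at the center
    return (v / r) * np.sqrt(r / d_max)  # direction preserved, distance compressed
```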

Sample placement for the equidistant, logarithmic, and log+warp strategies (Figure 3).
Quantitative results are shown in Figure 4 and example views for Classroom in Figure 5. Image quality and number of samples follow an approximately linear relationship for all three techniques (PSNR is in logarithmic space). In all three cases, reducing the number of samples also significantly reduces image quality, and only high sample counts achieve satisfactory quality. However, logarithmic sampling significantly improves the quality and achieves better quality-speed trade-offs for lower sample counts than equidistant sampling. Sampling in a warped space further increases quality. Still, even with log+warp we can only reduce the number of samples approximately by half to achieve identical quality to equidistant sampling.
4.2. Local Sampling
It can be observed that a fully trained NeRF with large sample counts spends most of its samples in empty space. Given a ground truth surface representation, e.g., a depth texture for each view describing the distance to the first opaque surface hit by each ray, it is possible to take a fraction of the outputs of a trained NeRF around the surface depth and still get a result of mostly equal quality.
This inspires the following question: Given such a ground truth depth texture to place samples, what is the best quality-speed trade-off that can possibly be reached? To investigate this, we test all three previous methods of sample placement (equidistant, logarithmic, log+warp) locally around the ground truth depth, keeping the step size identical for all sample counts. This should enable the network to only focus on the non-empty regions, without spending network capacity on empty space. The results of this study are shown in Figure 6 and 7.
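The local placement used in this experiment can be sketched as follows, assuming a fixed world-space step size shared by all sample counts (the concrete step values are not listed here):

```python
import numpy as np

def local_samples(gt_depth: np.ndarray, n: int, step: float) -> np.ndarray:
    """Place n samples per ray centered on the ground-truth surface depth, with a
    fixed world-space step size regardless of n. gt_depth has shape (num_rays,),
    the result has shape (num_rays, n)."""
    offsets = (np.arange(n) - (n - 1) / 2.0) * step   # symmetric around the surface
    return gt_depth[:, None] + offsets[None, :]

# Example: 4 samples around two rays hitting surfaces at depths 3.0 and 7.5.
print(local_samples(np.array([3.0, 7.5]), n=4, step=0.25))
```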
Local sampling results for equidistant, logarithmic, and log+warp placement (Figures 6 and 7).

These results show that, given an ideal sampling oracle, significant gains in quality-speed trade-offs can be achieved when rendering and training raymarching-based neural representations. When comparing the equidistant local sampling to the logarithmic and log+warp sampling, our results suggest that non-linear sampling with additional non-linear space warping towards a view cell achieves the best results overall.
In any case, relying on localized sampling guided by an oracle has multiple advantages: First, raymarching can be sped up significantly, as few samples are sufficient to generate high-quality images, see Figure 1. Second, supervised learning for localized samples reduces training time significantly, as the network does not need to learn about empty space. Third, the network can spend its capacity on important samples that contribute to the final image, and does not need to waste capacity on empty space.
However, in practice, the ground truth depth is often not available during inference for neural rendering due to memory or computational constraints: large-scale scenes require a significant amount of storage to represent via geometry or individual depth maps. Thus, local sampling from ground truth depth can be considered a niche scenario, and we therefore target an efficiently compressed sampling representation via an MLP-based sampling oracle that only uses ground truth depth during training.
5. Sampling Oracle Network
In Section 4, we showcased potential improvements to the distribution of ray samples during training and rendering. Locally sampling around a known ground truth surface representation is the most promising approach. However, surface depth for novel views is generally unknown during inference, unless all potential views in a given view cell are densely captured or a surface representation is available for rendering, both of which defeat the purpose of having a compact neural representation.
Therefore, we introduce an oracle network to predict ideal sampling locations for a second, raymarched shading network. This oracle network takes a ray as input and provides information about sample locations along that ray. Per-ray depth estimation in a view cell is non-trivial if depth estimates must be consistent across views, especially around depth discontinuities. We found that using an MLP of the same size as the shading network generates sufficiently consistent depth estimates for the majority of rays in simple scenes. However, accurate estimates around depth discontinuities remain difficult to predict, as shown in Figure 8, leading to significant visual artifacts.

To mitigate this issue, we make the following observation: in general, the exact geometric structure of the scene is faithfully reconstructed using neural raymarching, where samples are also placed in empty space. Thus, we can give the oracle network more freedom; while it must predict sample locations around the hit surface, it can also provide additional estimates. Consider two neighboring rays around a depth discontinuity, one hitting the foreground object and one the background object: Representing this discontinuity accurately in ray space is difficult, as small changes in ray origin or direction may alternate between fore- and background. However, if for both the background and the foreground rays the network is allowed to return the same result, i.e., sampling at the foreground and the background depth, the oracle task is easier to learn.
5.1. Classified Depth
Interestingly, a straightforward simultaneous prediction of multiple real-valued depth outputs did not improve results compared to a single depth estimate per pixel in our experiments. Alternatively, the oracle can output sample likelihoods along the ray, i.e., a likelihood that a sample at a certain location will increase image quality. Unlike NeRF's refinement network, we want to avoid raymarching the oracle network and instead evaluate it only once per ray. Moreover, we also want to significantly reduce the number of samples used by the shading network. To this end, the oracle is trained via classification, where each class corresponds to a discretized segment along the ray. For every discrete ray segment, a high value indicates that it should receive (multiple) samples; a low value indicates that the segment can be skipped.
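For a single ray with known ground truth depth, the simplest such classification target marks the segment containing the surface; a sketch, where the segment count and the log-spaced segment boundaries are assumptions consistent with Sections 4.1 and 5.2:

```python
import numpy as np

def one_hot_depth_target(gt_depth: float, near: float, far: float, n_classes: int = 128) -> np.ndarray:
    """Discretize one ray's ground-truth depth into n_classes log-spaced segments
    and return a one-hot classification target for BCE training."""
    edges = np.exp(np.linspace(np.log(near), np.log(far), n_classes + 1))  # segment boundaries
    idx = np.clip(np.searchsorted(edges, gt_depth) - 1, 0, n_classes - 1)  # segment containing the hit
    target = np.zeros(n_classes)
    target[idx] = 1.0
    return target
```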
Discretizing the ground truth depth value of each ray leads to a one-hot encoding as the target, which can be trained using the common binary cross-entropy (BCE) loss, see Figure 9.
To further aid the depth oracle in predicting consistent outputs at depth discontinuities, we provide a multi-class target that is filtered in image-space and across depth. To this end, we blur depth values from neighboring pixels in the ground truth target, using a radial (Euclidean distance) filter that includes the values of neighboring rays with a lower contribution.
The classification target for a ray through a pixel at a given quantization step thus combines the discretized depth values of all pixels within the filter kernel, where the contribution of each neighboring ray decreases with its Euclidean distance in pixels; the number of quantization steps along the ray and the filter size are hyperparameters. For example, with a 5×5 filter, rays at a distance of one or two pixels still contribute to the output, with decreasing weight.
For multiple depth values that fall into the same discretized segment, we only keep the maximum contribution (i.e., the contribution from the closest ray) to ensure a valid classification target and that the highest value is placed at the actual surface.

From a classification point of view, filtering increases the number of false positives compared to the target surface depth. However, for the subsequent raymarching step, it effectively decreases the overall number of false negatives by ensuring that important regions of interest around the surface depth are not missed. This becomes apparent when considering that rays exist in a continuous space, i.e., the network does not need to learn hard boundaries between rays at depth discontinuities. Translating a higher false positive rate to raymarching, we increase the likelihoods for regions that do not need any samples. Thus, overall, we need to place more samples along rays to hit the “right” sample location, while at the same time reducing the chance to miss surfaces.
Given that the oracle network serves a similar purpose as the coarse network in NeRF, with the same capacity, while being evaluated only once instead of 64 times, allowing the network to move towards false positive classifications is essential. Additionally, false positives are handled by the local raymarching network, as empty space will still result in no image contribution, even at low sample counts. A missed surface, on the other hand, would clearly reduce image quality.
Following the idea of label smoothing, we also filter along the depth axis, using a simple 1D triangle filter: the image-space-filtered classification values of neighboring depth segments are blended into each segment with a weight that decreases linearly with their distance along the ray, and the result is clamped between 0 and 1.
Figure 11 shows an example output for a fully filtered target. This filter further increases the number of positively classified cells, enabling the oracle network to learn the depth of surfaces more conservatively.
According to our observations, for scenes with fine structures and large view cells, both smoothing steps are essential. See Figure 10 for the depth oracle target output for different filtering techniques.
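Putting the two filtering steps together, the target generation for a whole ground truth depth image can be sketched as follows; the radial and triangular falloff shapes below are plausible choices rather than the exact filter weights used here:

```python
import numpy as np

def classified_depth_target(depth_img: np.ndarray, near: float, far: float,
                            n_classes: int = 128, k_xy: int = 5, k_z: int = 5) -> np.ndarray:
    """Build the filtered classification target of shape (H, W, n_classes) from a
    ground-truth depth image: spread each pixel's discretized depth to its k_xy
    neighborhood with a radially decaying weight (combining overlaps with max),
    then smooth along depth with a 1D triangle filter of width k_z and clamp to [0, 1]."""
    h, w = depth_img.shape
    edges = np.exp(np.linspace(np.log(near), np.log(far), n_classes + 1))
    seg = np.clip(np.searchsorted(edges, depth_img) - 1, 0, n_classes - 1)

    target = np.zeros((h, w, n_classes))
    r_xy = k_xy // 2
    for dy in range(-r_xy, r_xy + 1):             # image-space neighborhood filter
        for dx in range(-r_xy, r_xy + 1):
            dist = np.hypot(dx, dy)
            if dist > r_xy:
                continue
            weight = 1.0 - dist / (r_xy + 1)      # radial (Euclidean distance) falloff
            # np.roll wraps at the image border; a real implementation would pad instead.
            shifted = np.roll(np.roll(seg, dy, axis=0), dx, axis=1)
            contrib = np.zeros_like(target)
            np.put_along_axis(contrib, shifted[..., None], weight, axis=2)
            target = np.maximum(target, contrib)  # keep the strongest contribution per segment

    smoothed = np.zeros_like(target)              # 1D triangle filter along the depth axis
    r_z = k_z // 2
    for dz in range(-r_z, r_z + 1):
        smoothed += (1.0 - abs(dz) / (r_z + 1)) * np.roll(target, dz, axis=2)
    return np.clip(smoothed, 0.0, 1.0)
```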

Figure 12: We treat the classified depth output as a piecewise-constant PDF (blue) and translate it into a cumulative distribution function (CDF, green). Sampling the resulting CDF at equidistant steps (orange) concentrates the samples (red) around the ray regions with a large classified depth output.
Finally, to translate the classification values to sample locations, we use the same approach as NeRF when going from the coarse to the fine network, by sampling the inverse transform of the piecewise-constant probability density function (PDF)
(Mildenhall et al., 2020); see Figure 12.
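A sketch of this inverse-transform sampling step, given the oracle output and the (log-spaced) segment boundaries:

```python
import numpy as np

def sample_from_classified_depth(likelihoods: np.ndarray, edges: np.ndarray, n: int) -> np.ndarray:
    """Treat the oracle output (one value per depth segment) as a piecewise-constant
    PDF over the segments bounded by `edges` (n_classes + 1 values), build the CDF,
    and invert it at n equidistant positions; samples concentrate in segments with
    large classified depth output."""
    pdf = likelihoods / (likelihoods.sum() + 1e-8)
    cdf = np.concatenate(([0.0], np.cumsum(pdf)))              # one entry per segment boundary
    u = (np.arange(n) + 0.5) / n                               # equidistant positions in (0, 1)
    idx = np.clip(np.searchsorted(cdf, u) - 1, 0, len(pdf) - 1)
    t = (u - cdf[idx]) / np.maximum(cdf[idx + 1] - cdf[idx], 1e-8)
    return edges[idx] + t * (edges[idx + 1] - edges[idx])      # interpolate inside each segment
```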
5.2. Ray Unification and Oracle Input
To relieve the oracle network of the burden of disambiguating between rays originating from different starting locations, we map every ray starting location onto a sphere circumscribed around the view cell, see Figure 13.
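Geometrically, this amounts to intersecting each ray with the circumscribed sphere and moving its origin backwards onto that intersection; a sketch with illustrative variable names:

```python
import numpy as np

def unify_ray(origin: np.ndarray, direction: np.ndarray,
              cell_center: np.ndarray, cell_size: np.ndarray):
    """Move a ray origin that lies inside the view cell backwards along the ray onto
    the sphere circumscribed around the cell, so that all rays describing the same
    line produce the same oracle input."""
    d = direction / np.linalg.norm(direction)
    radius = 0.5 * np.linalg.norm(cell_size)       # circumscribed sphere of the AABB view cell
    oc = origin - cell_center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)  # positive whenever the origin is inside the sphere
    t = -b - np.sqrt(disc)                         # backward-most intersection (t <= 0)
    return origin + t * d, d
```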
Following the intuition of asking the oracle network to provide multiple (discretized) outputs along the ray, we additionally supply the same number of 3D positions regularly sampled along the ray as input, namely the centers of the discretized depth ranges used for depth classification. These 3D positions are provided in a single network evaluation without increasing the network capacity, therefore keeping the evaluation cost similar. We experimented with higher-dimensional positional and Fourier encodings for the input parameters, but they did not reliably increase output quality. This may be due to the sheer number of input parameters, which makes it more difficult for the network to learn.
As a logarithmic transformation on the sample positions along the ray increases quality for raymarching-based neural representations (Section 4.1), we use the same transformation on the classification steps, using smaller steps close to the camera and larger steps in the background.
5.3. Optimal Filter Sizes and Depth Oracle Inputs
To evaluate the optimal filter sizes for our neighborhood pixel filter and our depth label smoothing filter (Section 5.1), as well as the number of 3D input positions along the ray (I), we conduct an additional ablation study across a subset of these three parameters, using the classified depth oracle network as the input for a locally raymarched shading network, similar to the experiment in Section 4.2.
We vary the number of samples per ray between 2 and 16 and test on the Bulldozer, Forest, Classroom, and San Miguel scenes (Section 6.1), evaluating the overall quality on the test sets. The depth oracle network and the shading network have the same number of hidden layers and hidden units. We first train the depth oracle network, before training the shading network using the predictions from the depth oracle. Additionally, to illustrate the importance of our depth classification, we compare against a simpler depth oracle that only predicts a single depth value to guide the shading network, with (SD unified) and without (SD) ray unification. Our final metric is the resulting quality of the RGB output of the shading network, which should improve when the depth oracle provides better sampling positions.
The results shown in Table 1 indicate that filtering both along the image dimensions and along depth leads to improvements in PSNR, without additional inference cost, even for very low sample counts. Furthermore, including additional sample positions along the ray seems to improve quality even further. For additional per-scene results, please refer to the supplemental material.
Method | PSNR (N=2) | PSNR (N=4) | PSNR (N=8) | PSNR (N=16) | FLIP (N=2) | FLIP (N=4) | FLIP (N=8) | FLIP (N=16)
---|---|---|---|---|---|---|---|---
SD | 23.970 | 24.449 | 25.260 | 26.182 | 0.135 | 0.126 | 0.114 | 0.104 |
SD unified | 25.437 | 25.910 | 26.638 | 27.326 | 0.113 | 0.107 | 0.101 | 0.094 |
K-1 Z-1 I-1 | 26.525 | 27.685 | 28.578 | 29.722 | 0.101 | 0.092 | 0.086 | 0.081 |
K-5 Z-1 I-1 | 27.086 | 28.470 | 29.462 | 30.771 | 0.098 | 0.087 | 0.082 | 0.075 |
K-5 Z-1 I-128 | 27.893 | 29.247 | 30.631 | 31.303 | 0.089 | 0.084 | 0.077 | 0.073 |
K-5 Z-5 I-128 | 28.198 | 29.670 | 30.957 | 31.589 | 0.087 | 0.080 | 0.074 | 0.072 |
K-9 Z-1 I-1 | 27.198 | 28.698 | 29.992 | 31.105 | 0.096 | 0.084 | 0.085 | 0.075 |
K-9 Z-1 I-128 | 27.747 | 29.382 | 30.937 | 31.574 | 0.091 | 0.083 | 0.074 | 0.072 |
K-9 Z-9 I-128 | 27.501 | 29.118 | 30.363 | 31.510 | 0.095 | 0.084 | 0.077 | 0.072 |
Method | Average MFLOP per pixel | Bulldozer PSNR | Bulldozer FLIP | Forest PSNR | Forest FLIP | Classroom PSNR | Classroom FLIP | San Miguel PSNR | San Miguel FLIP | Pavillon PSNR | Pavillon FLIP
---|---|---|---|---|---|---|---|---|---|---|---
DONeRF (2 samples) | 2.703 | 29.161 | 0.075 | 26.946 | 0.108 | 30.749 | 0.070 | 25.936 | 0.097 | 30.541 | 0.103 |
DONeRF (4 samples) | 4.355 | 31.502 | 0.062 | 28.029 | 0.103 | 32.244 | 0.067 | 26.903 | 0.088 | 31.032 | 0.100 |
DONeRF (8 samples) | 7.659 | 33.046 | 0.057 | 29.667 | 0.094 | 33.482 | 0.060 | 27.632 | 0.086 | 31.370 | 0.098 |
DONeRF (16 samples) | 14.285 | 33.960 | 0.055 | 29.897 | 0.093 | 34.348 | 0.058 | 28.150 | 0.084 | 31.574 | 0.097 |
log+warp NeRF (64+128) | 211.422 | 33.341 | 0.059 | 26.026 | 0.135 | 32.036 | 0.073 | 27.041 | 0.099 | 31.754 | 0.099
6. Evaluation
We compare DONeRF against the baseline of NeRF (Mildenhall et al., 2020). We mainly analyze the ability to extract novel high-quality views from neural representations of generated content, given challenging scenarios such as high-frequency geometry and large depth ranges. We evaluate this by computing PSNR and FLIP (Andersson et al., 2020) values against ground truth renderings. Please refer to Sections 4 and 5 for ablation studies of the main components of DONeRF, and see the supplemental material for more detailed per-scene results.
For our prototype real-time implementation of DONeRF, we use a combination of TensorRT and CUDA. Ray unification, space warping, feature generation, and raymarching run in custom CUDA kernels with one thread per ray. Both networks are evaluated using TensorRT at half floating-point precision. Thus, all parts still offer significant optimization potential, e.g., by using multiple threads during input feature generation or by further reducing the bit length of the network evaluation.
Nevertheless, we already achieve interactive frame rates at medium resolutions on an NVIDIA RTX 3090. Among the individual stages (ray unification and first feature generation, the oracle network, space warping and second feature generation, the shading network, and final color generation), the shading network remains the major cost in our setup, and overall computation time increases nearly linearly with the number of samples per ray.
6.1. Datasets
We collected a set of scenes that showcase both fine, high-frequency details and large depth ranges to validate that DONeRF is applicable to a wide variety of scenes. All datasets are rendered with Blender (https://www.blender.org/), using the Cycles path tracing engine (taking approximately 20 minutes per image), and split into train, validation, and test sets. Examples of the ground truth images, using the same views as the teaser (Figure 1), are shown in the supplemental material. For each scene, a view cell has been defined that allows a wide range of movement and rotation to uncover potential issues with disocclusions and challenging geometry, while not intersecting geometry within the scene.
Bulldozer (by "Heinzelnisse", https://www.blendswap.com/blend/11490) shows a close view of a toy bulldozer made of building bricks, with lots of fine, high-frequency details and mostly diffuse shading.
Forest (by Robin Tran, https://cloud.blender.org/p/gallery/5fbd186ec57d586577c57417) shows a vast field of cel-shaded foliage and trees, with a person in the foreground looking into the distance. This dataset requires a good foreground and background representation, and the foliage provides challenging high-frequency geometric content.
Classroom (by Christophe Seux, https://download.blender.org/demo/test/classroom.zip) provides a challenging indoor scenario, with thin geometric detail such as the chairs' legs, as well as very detailed texture work.
San Miguel (by Guillermo M. Leal Llaguno, https://casual-effects.com/g3d/data10/index.html) showcases both difficult high-frequency transparent materials (the leaves of the trees) and complex disocclusions caused by the large pillars, providing a test on a proven computer graphics test scene.
Pavillon (by "Hamza Cheggour / eMirage", https://download.blender.org/demo/test/pabellon_barcelona_v1.scene_.zip) contains complex view-dependent effects such as reflection and refraction in the pool, as well as completely transparent objects, testing the ability of each neural representation to correctly reproduce those effects.
6.2. Evaluation Setup
For each scene, we compare our best-performing DONeRF, using a neighborhood filter size of 5, a depth smoothing filter size of 5, and unified inputs with 128 sample positions along the ray for the depth oracle network (K-5 Z-5 I-128 in Table 1, Section 5.3), against NeRF (Mildenhall et al., 2020) with its coarse+fine sampling approach at 64 + 128 samples. As discussed in Section 4, we transform each sample logarithmically and warp it towards the view cell. We train each network for the same number of iterations and use the checkpoint with the lowest validation loss for testing. As our optimizer, we use Adam (Kingma and Ba, 2015). The number of rays and samples per batch varies with the available GPU memory for each network. For RGB outputs, we use a standard MSE loss, while DONeRF's depth classification uses the binary cross-entropy loss. DONeRF's depth oracle network is pretrained on the ground truth depth, after which its weights are locked and the shading network is trained. All images are downsampled to speed up training. All MLPs contain the same number of hidden layers and hidden units, and all inputs to the shading network are encoded using positional encoding, with more positional and directional encoding frequencies for San Miguel and Pavillon.
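The two training objectives can be sketched in PyTorch as follows; tensor shapes are illustrative placeholders, and the sketch uses the logits variant of the BCE loss, which folds the sigmoid into the loss (whether the actual network applies the sigmoid internally is an implementation detail):

```python
import torch
import torch.nn as nn

# The two training objectives; tensor shapes below are illustrative placeholders.
depth_loss = nn.BCEWithLogitsLoss()    # per-segment BCE on the filtered classification target
rgb_loss = nn.MSELoss()                # standard MSE on the composited RGB output

oracle_logits = torch.randn(1024, 128)           # (rays, depth segments), raw oracle output
depth_target = torch.rand(1024, 128)             # filtered classification target in [0, 1]
stage1 = depth_loss(oracle_logits, depth_target) # pretraining of the depth oracle

rgb_pred = torch.rand(1024, 3)                   # shading network output after accumulation
rgb_gt = torch.rand(1024, 3)
stage2 = rgb_loss(rgb_pred, rgb_gt)              # trained after the oracle weights are locked
```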
6.3. Results
The results of our evaluation are summarized in Table 2, and qualitative example views are shown in Figure 14. Compared to the NeRF baseline, DONeRF achieves better quality at significantly higher performance across all our test scenes. While DONeRF can still benefit from additional samples along each ray, the local sampling strategy reaches diminishing returns much earlier, and we demonstrate identical quality compared to NeRF already at 2 or 4 samples along the ray, using roughly 78× and 49× fewer FLOP per pixel, respectively (see Table 2).
We also show a qualitative comparison between NeRF, DONeRF, and a simplified version of DONeRF that only uses a single depth value for sample placement in Figures 15, 16, and 17.
Especially for Classroom, with its small geometric features around the chairs and its high-frequency shadow details, the results show that our depth oracle can also handle complex geometric relations. San Miguel forms another challenging use case, where our depth oracle and local sampling faithfully reconstruct fine geometric structures. Furthermore, regions around masked materials and semi-transparent foliage can be particularly difficult when significantly reducing sample counts, as both the foreground and the background need to be sampled. However, we see potential for improvement when looking at Pavillon: the transparent surfaces of the water and the glass windows pose an issue for our depth estimation, as the ground truth depth only contains a single depth value for large surface regions.
Qualitative comparisons using 4 samples per ray against NeRF with 64 + 128 samples (Figures 15–17).
7. Conclusion
Starting from an evaluation of the sampling strategies of NeRF, we showed that sampling should be done both non-linearly and warped towards a view cell. Using a given ground truth depth map for optimal sample placement, we showed that local sampling achieves equal quality with as few as 2 samples per ray compared to a fully sampled network. From this insight, we proposed our classified depth oracle network, which discretizes the space along each ray and spatially filters the target across x, y, and depth to further improve sample placement for difficult geometric scenarios. By using this classified depth oracle network to guide sample placement for a raymarched shading network, our DONeRF approach achieves equal or better quality compared to NeRF across all scenes, while using as few as 2–8 samples per pixel (instead of 64 + 128 samples). This shows that we can reliably predict good sample positions even for complex and large-scale scenes, where NeRF struggles.
7.1. Limitations and Future Work
Due to the scope of this work, we only focused on static synthetic scenes in our evaluation. Additionally, partially transparent objects in scenes such as San Miguel and Pavillon, e.g., the water or the glass windows, pose an issue, as only a single ground truth depth value was used for training the sampling oracle network. This leads to samples only being placed around that single depth value. Fortunately, since the depth oracle is a classification network, it would be straightforward to extend its training target with multiple ground truth surface points to correctly sample partially transparent objects. Furthermore, related research has already shown that depth-aware losses can be introduced to achieve more consistency in dynamic videos (Xian et al., 2020; Li et al., 2020), as well as to initialize a voxel representation to constrain the space of potentially visible voxels (Liu et al., 2020). Our classified depth sampling strategy could be adapted as a variation of these ideas, allowing for more consistency across dynamic content.
Another open point for investigation is the accuracy of sample placement when ground truth depth is not available, e.g., for real-world content. Our filtered depth discretization should already provide a good starting point for depth estimates that are not fully accurate, as long as the estimated 3D surface positions are consistent across views. To our knowledge, DONeRF is the first reliable method to render from a raymarched neural scene representation at interactive frame rates, and opens the door for high-quality dynamic renderings in real-time. We are confident that such a local sampling strategy will be essential for real-time neural rendering going forward.
References
- Andersson et al. (2020) Pontus Andersson, Jim Nilsson, Tomas Akenine-Möller, Magnus Oskarsson, Kalle Åström, and Mark D. Fairchild. 2020. FLIP: A Difference Evaluator for Alternating Images. Proc. ACM Comput. Graph. Interact. Tech. 3, 2, Article 15 (Aug. 2020), 23 pages. https://doi.org/10.1145/3406183
- Bemana et al. (2020) Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, and Tobias Ritschel. 2020. X-Fields: Implicit Neural View-, Light-and Time-Image Interpolation. arXiv preprint arXiv:2010.00450 (2020).
- Bi et al. (2020) Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, and Ravi Ramamoorthi. 2020. Neural Reflectance Fields for Appearance Acquisition. (2020). arXiv:cs.CV/2008.03824
- Boss et al. (2020) Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu, and Hendrik P. A. Lensch. 2020. NeRD: Neural Reflectance Decomposition from Image Collections. (2020). arXiv:cs.CV/2012.03918
- Broxton et al. (2020) Michael Broxton, John Flynn, Ryan Overbeck, Daniel Erickson, Peter Hedman, Matthew DuVall, Jason Dourgarian, Jay Busch, Matt Whalen, and Paul Debevec. 2020. Immersive Light Field Video with a Layered Mesh Representation. 39, 4 (2020), 86:1–86:15.
- Chan et al. (2020) Eric Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, and Gordon Wetzstein. 2020. pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis. In arXiv.
- Dellaert and Yen-Chen (2021) Frank Dellaert and Lin Yen-Chen. 2021. Neural Volume Rendering: NeRF And Beyond. arXiv:cs.CV/2101.05204
- Du et al. (2020) Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, and Jiajun Wu. 2020. Neural Radiance Flow for 4D View Synthesis and Video Processing. arXiv preprint arXiv:2012.09790 (2020).
- Flynn et al. (2019) J. Flynn, M. Broxton, P. Debevec, M. DuVall, G. Fyffe, R. Overbeck, N. Snavely, and R. Tucker. 2019. DeepView: View Synthesis With Learned Gradient Descent. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2362–2371. https://doi.org/10.1109/CVPR.2019.00247
- Gafni et al. (2020) Guy Gafni, Justus Thies, Michael Zollhöfer, and Matthias Nießner. 2020. Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction. (2020). arXiv:cs.CV/2012.03065
- Gao et al. (2020) Chen Gao, Yichang Shih, Wei-Sheng Lai, Chia-Kai Liang, and Jia-Bin Huang. 2020. Portrait Neural Radiance Fields from a Single Image. arXiv preprint arXiv:2012.05903 (2020).
- Hedman et al. (2018) Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. 2018. Deep Blending for Free-viewpoint Image-based Rendering. 37, 6 (2018), 257:1–257:15.
- Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In ICLR (Poster). http://arxiv.org/abs/1412.6980
- Li et al. (2020) Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. 2020. Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes. arXiv preprint arXiv:2011.13084 (2020).
- Lindell et al. (2020) David B Lindell, Julien NP Martel, and Gordon Wetzstein. 2020. AutoInt: Automatic Integration for Fast Neural Volume Rendering. arXiv preprint arXiv:2012.01714 (2020).
- Liu et al. (2020) Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. 2020. Neural sparse voxel fields. Advances in Neural Information Processing Systems 33 (2020).
- Lombardi et al. (2019) Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. 2019. Neural Volumes: Learning Dynamic Renderable Volumes from Images. ACM Trans. Graph. 38, 4, Article 65 (July 2019), 14 pages.
- Martin-Brualla et al. (2020) Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, and Daniel Duckworth. 2020. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. (2020).
- Mildenhall et al. (2019) Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. 2019. Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines. ACM Transactions on Graphics (TOG) (2019).
- Mildenhall et al. (2020) Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. 2020. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV.
- Niemeyer and Geiger (2020) Michael Niemeyer and Andreas Geiger. 2020. GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields. arXiv:cs.CV/2011.12100
- Ost et al. (2020) Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, and Felix Heide. 2020. Neural Scene Graphs for Dynamic Scenes. arXiv:cs.CV/2011.10379
- Park et al. (2019) J. J. Park, P. Florence, J. Straub, R. Newcombe, and S. Lovegrove. 2019. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 165–174. https://doi.org/10.1109/CVPR.2019.00025
- Park et al. (2020) Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. 2020. Deformable Neural Radiance Fields. arXiv preprint arXiv:2011.12948 (2020).
- Pumarola et al. (2020) Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. 2020. D-NeRF: Neural Radiance Fields for Dynamic Scenes. arXiv preprint arXiv:2011.13961 (2020).
- Rebain et al. (2020) Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, and Andrea Tagliasacchi. 2020. DeRF: Decomposed Radiance Fields. arXiv e-prints, Article arXiv:2011.12490 (Nov. 2020), arXiv:2011.12490 pages. arXiv:cs.CV/2011.12490
- Schwarz et al. (2020) Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. 2020. GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis. In Advances in Neural Information Processing Systems (NeurIPS).
- Shum and Kang (2000) Harry Shum and Sing Bing Kang. 2000. Review of image-based rendering techniques. In Visual Communications and Image Processing 2000, King N. Ngan, Thomas Sikora, and Ming-Ting Sun (Eds.), Vol. 4067. International Society for Optics and Photonics, SPIE, 2 – 13. https://doi.org/10.1117/12.386541
- Sitzmann et al. (2020) Vincent Sitzmann, Julien N.P. Martel, Alexander W. Bergman, David B. Lindell, and Gordon Wetzstein. 2020. Implicit Neural Representations with Periodic Activation Functions. In Proc. NeurIPS.
- Sitzmann et al. (2019a) Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. 2019a. Deepvoxels: Learning persistent 3d feature embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2437–2446.
- Sitzmann et al. (2019b) Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. 2019b. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems. 1121–1132.
- Srinivasan et al. (2020) Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, and Jonathan T. Barron. 2020. NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis. (2020).
- Srinivasan et al. (2019) P. P. Srinivasan, R. Tucker, J. T. Barron, R. Ramamoorthi, R. Ng, and N. Snavely. 2019. Pushing the Boundaries of View Extrapolation With Multiplane Images. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 175–184. https://doi.org/10.1109/CVPR.2019.00026
- Tancik et al. (2020) Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. 2020. Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains. NeurIPS (2020).
- Trevithick and Yang (2020) Alex Trevithick and Bo Yang. 2020. GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering. In arXiv:2010.04595.
- Xian et al. (2020) Wenqi Xian, Jia-Bin Huang, Johannes Kopf, and Changil Kim. 2020. Space-time Neural Irradiance Fields for Free-Viewpoint Video. arXiv preprint arXiv:2011.12950 (2020).
- Yu et al. (2020) Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. 2020. pixelNeRF: Neural Radiance Fields from One or Few Images. arXiv:cs.CV/2012.02190
- Yuan et al. (2021) Wentao Yuan, Zhaoyang Lv, Tanner Schmidt, and Steven Lovegrove. 2021. STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in Motion with Neural Rendering. arXiv preprint arXiv:2101.01602 (2021).
- Zhang and Chen (2004) Cha Zhang and Tsuhan Chen. 2004. A survey on image-based rendering—representation, sampling and compression. Signal Processing: Image Communication 19, 1 (2004), 1 – 28. https://doi.org/10.1016/j.image.2003.07.001
- Zhou et al. (2018) Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. 2018. Stereo Magnification: Learning View Synthesis Using Multiplane Images. ACM Trans. Graph. 37, 4, Article 65 (July 2018), 12 pages. https://doi.org/10.1145/3197517.3201323