
D-NeRF: Neural Radiance Fields for Dynamic Scenes

by Albert Pumarola, et al.

Neural rendering techniques combining machine learning with geometric reasoning have arisen as one of the most promising approaches for synthesizing novel views of a scene from a sparse set of images. Among these, Neural Radiance Fields (NeRF) stands out: it trains a deep network to map 5D input coordinates (representing spatial location and viewing direction) into a volume density and view-dependent emitted radiance. However, despite achieving an unprecedented level of photorealism on the generated images, NeRF is only applicable to static scenes, where the same spatial location can be queried from different images. In this paper we introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain, allowing us to reconstruct and render novel images of objects under rigid and non-rigid motions from a single camera moving around the scene. For this purpose we consider time as an additional input to the system, and split the learning process into two main stages: one that encodes the scene into a canonical space and another that maps this canonical representation into the deformed scene at a particular time. Both mappings are simultaneously learned using fully-connected networks. Once the networks are trained, D-NeRF can render novel images, controlling both the camera view and the time variable, and thus, the object movement. We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions. Code, model weights and the dynamic scenes dataset will be released.





1 Introduction

Rendering novel photo-realistic views of a scene from a sparse set of input images is necessary for many applications in augmented reality, virtual reality, 3D content production, games and the movie industry. Recent advances in the emerging field of neural rendering, which learn scene representations encoding both geometry and appearance [mildenhall2020nerf, martin2020nerf, liu2020neural, yariv2020multiview, niemeyer2020differentiable, rematas2020neural], have achieved results that largely surpass those of traditional Structure-from-Motion [hartley2003multiple, triggs1999bundle, snavely2006photo], light-field photography [levoy1996light] and image-based rendering approaches [buehler2001unstructured]. For instance, Neural Radiance Fields (NeRF) [mildenhall2020nerf] have shown that simple multilayer perceptron networks can encode the mapping from 5D inputs (representing spatial locations (x, y, z) and camera views (θ, φ)) to emitted radiance values and volume density. This learned mapping then allows free-viewpoint rendering with extraordinary realism. Subsequent works have extended Neural Radiance Fields to images in the wild undergoing severe lighting changes [martin2020nerf] and have proposed sparse voxel fields for rapid inference [liu2020neural]. Similar schemes have also been recently used for multi-view surface reconstruction [yariv2020multiview] and learning surface light fields [oechsle2020learning].

Nevertheless, all these approaches assume a static scene without moving objects. In this paper we relax this assumption and propose, to the best of our knowledge, the first end-to-end neural rendering system that is applicable to dynamic scenes, made of both still and moving/deforming objects. While there exist approaches for 4D view synthesis [BansalCVPR2020], our approach is different in that: 1) we only require a single camera; 2) we do not need to pre-compute a 3D reconstruction; and 3) our approach can be trained end-to-end.

Our idea is to represent the input of our system with a continuous 6D function which, besides 3D location and camera view, also considers the time component t. Naively extending NeRF to learn a mapping from (x, y, z, θ, φ, t) to density and radiance does not produce satisfying results, as the temporal redundancy in the scene is not effectively exploited. Our observation is that objects can move and deform, but typically do not appear or disappear. Inspired by classical 3D scene flow [vedula2005three], the core idea of our method, denoted Dynamic-NeRF (D-NeRF in short), is to decompose learning into two modules. The first one learns a spatial mapping Ψ_t between each point of the scene at time t and a canonical scene configuration. The second module Ψ_x regresses the scene radiance emitted in each direction and the volume density given the tuple (x, y, z, θ, φ) in canonical space. Both mappings are learned with deep fully connected networks without convolutional layers. The learned model then allows synthesizing novel images, providing control in the continuum of camera views and the time component, or equivalently, the dynamic state of the scene (see Fig. 1).

We thoroughly evaluate D-NeRF on scenes undergoing very different types of deformation, from articulated motion to humans performing complex body poses. We show that by decomposing learning into a canonical scene and scene flow D-NeRF is able to render high-quality images while controlling both camera view and time components. As a side-product, our method is also able to produce complete 3D meshes that capture the time-varying geometry and which remarkably are obtained by observing the scene under a specific deformation only from one single viewpoint.

Figure 1: Problem Definition.

Given a sparse set of images of a dynamic scene moving non-rigidly and being captured by a monocular camera, we aim to design a deep learning model to implicitly encode the scene and synthesize novel views at an arbitrary time. Here, we visualize a subset of the input training frames paired with accompanying camera parameters, and we show three novel views at three different time instances rendered by the proposed method.

2 Related work

Neural implicit representation for 3D geometry.

The success of deep learning on the 2D domain has spurred a growing interest in the 3D domain. Nevertheless, which is the most appropriate 3D data representation for deep learning remains an open question, especially for non-rigid geometry. Standard representations for rigid geometry include point-clouds [SuCVPR2017, pumarola2020c], voxels [GirdharECCV2016, YanNIPS2016] and octrees [WangSIGGRAPH2017, TatarchenkoICCV2017]. Recently, there has been a strong burst in representing 3D data in an implicit manner via a neural network [mescheder2019occupancy, park2019deepsdf, chen2019learning, xu2019disn, chibane2020implicit, genova2020local]. The main idea behind this approach is to describe the information (occupancy, distance to surface, color, illumination) of a 3D point as the output of a neural network. Compared to the previously mentioned representations, neural implicit representations allow for continuous surface reconstruction at a low memory footprint.

The first works exploiting implicit representations [mescheder2019occupancy, park2019deepsdf, chen2019learning, xu2019disn] were limited by their requirement of access to 3D ground-truth geometry, often expensive or even impossible to obtain for in-the-wild scenes. Subsequent works relaxed this requirement by introducing a differentiable renderer, allowing 2D supervision. For instance, [liu2019learning] proposed an efficient ray-based field probing algorithm for image-to-field supervision. [niemeyer2020differentiable, yariv2020universal] introduced implicit-based methods to calculate the exact derivative of a 3D occupancy field surface intersection with a camera ray. In [sitzmann2019scene], a recurrent neural network was used to ray-cast the scene and estimate the surface geometry. However, although these techniques have great potential to represent 3D shapes in an unsupervised manner, they are typically limited to relatively simple geometries.

NeRF [mildenhall2020nerf] showed that implicitly representing a rigid scene with a 5D radiance field makes it possible to capture high-resolution geometry and photo-realistically render novel views. [martin2020nerf] extended this method to handle variable illumination and transient occlusions, to deal with in-the-wild images. In [liu2020neural], even more complex 3D surfaces were represented by using voxel-bounded implicit fields. And [yariv2020multiview] circumvented the need of multi-view camera calibration.

However, while all the mentioned methods achieve impressive results on rigid scenes, none of them can deal with dynamic and deformable scenes. Occupancy flow [niemeyer2019occupancy] was the first work to tackle non-rigid geometry, by learning a continuous vector field that assigns a motion vector to every point in space and time, but it requires full 3D ground-truth supervision. Neural volumes [lombardi2019neural] produced high-quality reconstruction results via an encoder-decoder voxel-based representation enhanced with an implicit voxel warp field, but they require a multi-view image capture setting.

To the best of our knowledge, D-NeRF is the first approach able to generate a neural implicit representation for non-rigid and time-varying scenes, trained solely on monocular data without 3D ground-truth supervision or a multi-view camera setting.

Novel view synthesis.

Novel view synthesis is a long-standing vision and graphics problem that aims to synthesize new images from arbitrary view points of a scene captured by multiple images. Most traditional approaches for rigid scenes consist in reconstructing the scene from multiple views with Structure-from-Motion [hartley2003multiple] and bundle adjustment [triggs1999bundle], while other approaches propose light-field based photography [levoy1996light]. More recently, deep learning based techniques [shen2019patient, kar2017learning, flynn2019deepview, choi2019extreme, mildenhall2019llff] are able to learn a neural volumetric representation from a set of sparse images.

However, none of these methods can synthesize novel views of dynamic scenes. To tackle non-rigid scenes, most methods approach the problem by reconstructing a dynamic 3D textured mesh. 3D reconstruction of non-rigid surfaces from monocular images is known to be severely ill-posed. Structure-from-Template (SfT) approaches [bartoli2015shape, chhatkuli2014stable, moreno2013pami] recover the surface geometry given a reference known template configuration, while Non-rigid Structure-from-Motion (NRSfM) techniques [tomasi1992shape, agudo2015simultaneous] exploit temporal information as an additional prior. Yet, SfT and NRSfM require either 2D-to-3D matches or 2D point tracks, limiting their general applicability to relatively well-textured surfaces and mild deformations.

Some of these limitations are overcome by learning based techniques, which have been effectively used for synthesizing novel photo-realistic views of dynamic scenes. For instance, [BansalCVPR2020, zitnick2004high, jiang20123d] capture the dynamic scene at the same time instant from multiple views, to then generate 4D space-time visualizations. [flynn2016deepstereo, philip2018plane, zhou2018stereo] also leverage simultaneous capture of the scene from multiple cameras to estimate depth, completing areas with missing information and then performing view synthesis. In [yoon2020novel], the need for multiple views is circumvented by using a pre-trained network that estimates a per-frame depth. This depth, jointly with the optical flow and consistent depth estimation across frames, is then used to interpolate between images and render novel views. Nevertheless, by decoupling depth estimation from novel view synthesis, the outcome of this approach becomes highly dependent on the quality of the depth maps as well as on the reliability of the optical flow. Very recently, X-Fields [bemana2020x] introduced a neural network to interpolate between images taken across different view, time or illumination conditions. However, while this approach is able to process dynamic scenes, it requires more than one view, and since no 3D representation is learned, the allowed variation in viewpoint is small.

D-NeRF is different from all prior work in that it does not require 3D reconstruction, can be learned end-to-end, and requires a single view per time instance. Another appealing characteristic of D-NeRF is that it inherently learns a time-varying 3D volume density and emitted radiance, which turns novel view synthesis into a ray-casting process rather than a view interpolation, and is thus remarkably more robust when rendering images from arbitrary viewpoints.

Figure 2: D-NeRF Model. The proposed architecture consists of two main blocks: a deformation network mapping all scene deformations to a common canonical configuration; and a canonical network regressing volume density and view-dependent RGB color from every camera ray.

3 Problem Formulation

Given a sparse set of images of a dynamic scene captured with a monocular camera, we aim to design a deep learning model able to implicitly encode the scene and synthesize novel views at an arbitrary time (see Fig. 1).

Formally, our goal is to learn a mapping M that, given a 3D point x = (x, y, z), outputs its emitted color c = (r, g, b) and volume density σ, conditioned on a time instant t and view direction d = (θ, φ). That is, we seek to estimate the mapping M : (x, d, t) → (c, σ).

An intuitive solution would be to directly learn the transformation M from the 6D space (x, d, t) to the 4D space (c, σ). However, as we will show in the results section, we obtain consistently better results by splitting the mapping into Ψ_x and Ψ_t, where Ψ_x represents the scene in its canonical configuration and Ψ_t a mapping between the scene at time instant t and the canonical one. More precisely, given a point x and viewing direction d at time instant t, we first transform the point position to its canonical configuration as Ψ_t : (x, t) → Δx. Without loss of generality, we choose the scene at t = 0 as the canonical scene. By doing so the scene is no longer independent between time instances, and becomes interconnected through a common canonical space anchor. The assigned emitted color and volume density under viewing direction d then equal those in the canonical configuration, Ψ_x : (x + Δx, d) → (c, σ).
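The composition through canonical space can be sketched as follows. The two networks below are hypothetical toy stand-ins (the real Ψ_t and Ψ_x are trained 8-layer MLPs); only the structure of the query, a deformation step followed by a canonical lookup, mirrors the formulation above.

```python
def deformation_net(x, t):
    """Toy stand-in for Psi_t: returns a displacement Delta x.
    By construction the displacement vanishes at the canonical time t = 0."""
    if t == 0.0:
        return (0.0, 0.0, 0.0)
    # Toy deformation: translate the scene along +x proportionally to t.
    return (0.5 * t, 0.0, 0.0)

def canonical_net(x, d):
    """Toy stand-in for Psi_x: returns (rgb, sigma) for a canonical point.
    The view direction d is accepted but unused in this toy version."""
    r = max(0.0, 1.0 - abs(x[0]))          # toy color depends on canonical x
    sigma = 1.0 if abs(x[0]) < 1.0 else 0.0  # toy density: a slab of material
    return (r, r, r), sigma

def query(x, d, t):
    """Full D-NeRF-style query: map (x, d, t) to (rgb, sigma) via canonical space."""
    dx = deformation_net(x, t)
    x_canonical = tuple(xi + di for xi, di in zip(x, dx))
    return canonical_net(x_canonical, d)
```

Note how all appearance and geometry live in `canonical_net`; `deformation_net` only moves points, which is what interconnects the time instances through the canonical anchor.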

We propose to learn Ψ_x and Ψ_t using a sparse set of T RGB images {I_t, T_t}_{t=1}^{T} captured with a monocular camera, where I_t denotes the image acquired under camera pose T_t ∈ SE(3) at time t. Although we could assume multiple views per time instance, we want to test the limits of our method, and assume a single image per time instance. That is, we do not observe the scene under a specific configuration/deformation state from different viewpoints.

4 Method

We now introduce D-NeRF, our novel neural renderer for view synthesis trained solely from a sparse set of images of a dynamic scene. We build on NeRF [mildenhall2020nerf] and generalize it to handle non-rigid scenes. Recall that NeRF requires multiple views of a rigid scene. In contrast, D-NeRF can learn a volumetric density representation for continuous non-rigid scenes trained with a single view per time instant.

As shown in Fig. 2, D-NeRF consists of two main neural network modules, which parameterize the mappings explained in the previous section. On the one hand we have the Canonical Network, an MLP (multilayer perceptron) Ψ_x(x, d) → (c, σ) trained to encode the scene in the canonical configuration, such that given a 3D point x and a view direction d it returns the emitted color c and volume density σ. The second module is called the Deformation Network and consists of another MLP, Ψ_t(x, t) → Δx, which predicts a deformation field defining the transformation between the scene at time t and the scene in its canonical configuration. We next describe in detail each of these blocks (Sec. 4.1), their interconnection for volume rendering (Sec. 4.2) and how they are learned (Sec. 4.3).

4.1 Model Architecture

Canonical Network. With the use of a canonical configuration we seek to find a representation of the scene that brings together the information of all corresponding points in all images. By doing this, the missing information from a specific viewpoint can then be retrieved from that canonical configuration, which shall act as an anchor interconnecting all images.

The canonical network Ψ_x is trained so as to encode the volumetric density and color of the scene in the canonical configuration. Concretely, given the 3D coordinates x of a point, we first encode them into a 256-dimensional feature vector. This feature vector is then concatenated with the camera viewing direction d, and propagated through a fully connected layer to yield the emitted color c and volume density σ for that given point in the canonical space.

Deformation Network. The deformation network Ψ_t is optimized to estimate the deformation field between the scene at a specific time instant and the scene in canonical space. Formally, given a 3D point x at time t, Ψ_t is trained to output the displacement Δx that transforms the given point to its position in the canonical space as x + Δx. For all experiments, without loss of generality, we set the canonical scene to be the scene at time t = 0:

Ψ_t(x, t) = Δx  if t ≠ 0,   Ψ_t(x, 0) = 0.   (1)
As shown in previous works [rahaman2019spectral, vaswani2017attention, mildenhall2020nerf], directly feeding raw coordinates and angles to a neural network results in low performance. Thus, for both the canonical and the deformation networks, we first encode x, d and t into a higher-dimensional space. We use the same positional encoder as in [mildenhall2020nerf], γ(p) = (sin(2^l π p), cos(2^l π p))_{l=0}^{L−1}. We independently apply the encoder γ to each coordinate and camera view component, using L = 10 for x, and L = 4 for d and t.
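The positional encoding γ can be sketched directly; the function and parameter names below are our own, but the frequencies and the per-component application follow the description above.

```python
import math

def positional_encoding(p, L):
    """gamma(p): for l = 0..L-1, append sin(2^l * pi * p) and cos(2^l * pi * p).
    Applied independently to a single scalar; returns a flat list of 2L values."""
    out = []
    for l in range(L):
        freq = (2.0 ** l) * math.pi
        out.append(math.sin(freq * p))
        out.append(math.cos(freq * p))
    return out

def encode_inputs(x, d, t, L_x=10, L_d=4, L_t=4):
    """Encode a D-NeRF input: L = 10 per position coordinate,
    L = 4 per view-direction component and for time."""
    feats = []
    for coord in x:
        feats += positional_encoding(coord, L_x)
    for comp in d:
        feats += positional_encoding(comp, L_d)
    feats += positional_encoding(t, L_t)
    return feats
```

For a 3D position, a 2-component view direction (θ, φ) and scalar time this yields 3·20 + 2·8 + 8 = 84 features, which is what the first MLP layer would consume.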

4.2 Volume Rendering

We now adapt the NeRF volume rendering equations to account for non-rigid deformations in the proposed 6D neural radiance field. Let x(h) be a point along the camera ray emitted from the center of projection to a pixel p. Considering near and far bounds h_n and h_f on that ray, the expected color C of the pixel p at time t is given by:

C(p, t) = ∫_{h_n}^{h_f} T(h, t) σ(p(h, t)) c(p(h, t), d) dh,   (2)
where p(h, t) = x(h) + Ψ_t(x(h), t),
[c(p(h, t), d), σ(p(h, t))] = Ψ_x(p(h, t), d),
and T(h, t) = exp(−∫_{h_n}^{h} σ(p(s, t)) ds).   (5)

The 3D point p(h, t) denotes the point on the camera ray x(h) transformed to canonical space using our Deformation Network Ψ_t, and T(h, t) is the accumulated probability that the ray travelling from h_n to h does not hit any other particle. Notice that the density σ and color c are predicted by our Canonical Network Ψ_x.

As in [mildenhall2020nerf], the volume rendering integrals in Eq. (2) and Eq. (5) can be approximated via numerical quadrature. To select a random set of N quadrature points {h_n}_{n=1}^{N}, a stratified sampling strategy is applied by uniformly drawing samples from evenly-spaced ray bins. A pixel color is approximated as:

C'(p, t) = Σ_{n=1}^{N} T'(h_n, t) (1 − exp(−σ(p(h_n, t)) δ_n)) c(p(h_n, t), d),   (6)
where T'(h_n, t) = exp(−Σ_{m=1}^{n−1} σ(p(h_m, t)) δ_m),

and δ_n = h_{n+1} − h_n is the distance between two consecutive quadrature points.
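The quadrature in Eq. (6) amounts to an alpha-compositing loop over the ray samples. A minimal sketch, assuming the densities, colors and inter-sample distances have already been evaluated (in D-NeRF this would be at the canonically warped points p(h_n, t)):

```python
import math

def render_ray(sigmas, colors, deltas):
    """Numerical quadrature of the volume rendering integral:
    C' = sum_n T'_n * (1 - exp(-sigma_n * delta_n)) * c_n,
    with T'_n = exp(-sum_{m<n} sigma_m * delta_m) accumulated front to back."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # T'_1 = 1: nothing occludes the first sample
    for sigma, c, delta in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        weight = transmittance * alpha
        for i in range(3):
            color[i] += weight * c[i]
        transmittance *= math.exp(-sigma * delta)  # attenuate for later samples
    return color
```

A fully opaque first sample makes all later samples invisible, exactly the occlusion behaviour the transmittance term T' encodes.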

4.3 Learning the Model

The parameters of the canonical and deformation networks are simultaneously learned by minimizing the mean squared error with respect to the T RGB images {I_t}_{t=1}^{T} of the scene and their corresponding camera pose matrices {T_t}_{t=1}^{T}. Recall that every time instant is only acquired by a single camera.

At each training batch, we first sample a random set of N_s pixels {p_i}_{i=1}^{N_s} corresponding to the rays cast from some camera position T_t to some pixels of the corresponding RGB image I_t. We then estimate the colors of the chosen pixels using Eq. (6). The training loss we use is the mean squared error between the rendered and real pixels:

L = (1/N_s) Σ_{i=1}^{N_s} ‖ Ĉ(p_i, t) − C'(p_i, t) ‖²_2,

where Ĉ(p_i, t) are the pixels' ground-truth colors.
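The loss is a plain per-batch mean squared error over the sampled pixels; a minimal sketch over lists of RGB triples (the function name is our own):

```python
def mse_loss(rendered, ground_truth):
    """Mean squared error over a batch of N_s pixel colors:
    (1/N_s) * sum_i || c_hat_i - c_i ||^2, each color an (r, g, b) triple."""
    n = len(rendered)
    total = 0.0
    for c_hat, c in zip(rendered, ground_truth):
        total += sum((a - b) ** 2 for a, b in zip(c_hat, c))
    return total / n
```

In practice this scalar would be backpropagated through Eq. (6) into both Ψ_x and Ψ_t simultaneously, which is what ties the two networks together during training.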

5 Implementation Details

Both the canonical network Ψ_x and the deformation network Ψ_t consist of simple 8-layer MLPs with ReLU activations. For the canonical network, a final sigmoid non-linearity is applied to the outputs c and σ. No non-linearity is applied to Δx in the deformation network.

For all experiments we set the canonical configuration as the scene state at t = 0 by enforcing it in Eq. (1). To improve the networks' convergence, we sort the input images according to their time stamps (from lower to higher) and then apply a curriculum learning strategy in which we incrementally add images with higher time stamps.
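The curriculum strategy above can be sketched as follows. The uniform stage boundaries here are our assumption for illustration, not the paper's exact schedule:

```python
def curriculum_schedule(images_with_time, num_stages):
    """Sort (timestamp, image) pairs by timestamp, then release them in stages:
    each stage trains on all images up to an incrementally higher timestamp."""
    ordered = sorted(images_with_time, key=lambda item: item[0])
    stages = []
    for s in range(1, num_stages + 1):
        # Assumed stage boundary: an equal fraction of the sorted images.
        cutoff = max(1, round(s * len(ordered) / num_stages))
        stages.append(ordered[:cutoff])
    return stages
```

Early stages therefore only see frames close to the canonical configuration at t = 0, which is consistent with anchoring the canonical scene before learning larger deformations.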

The model is trained with 400 × 400 images during 800k iterations with a batch size of 4,096 rays, each sampled 64 times along the ray. As for the optimizer, we use Adam [kingma2014adam] with a learning rate of 5e-4, β1 = 0.9, β2 = 0.999, and exponential learning rate decay to 5e-5. The model is trained with a single Nvidia® GTX 1080 for 2 days.

Figure 3:

Visualization of the Learned Scene Representation.

Given a dynamic scene at a specific time instant, D-NeRF learns a displacement field that maps all points of the scene to a common canonical configuration. The volume density and view-dependent emitted radiance for this configuration is learned and transferred to the original input points to render novel views. This figure represents, from left to right: the learned radiance from a specific viewpoint, the volume density represented as a 3D mesh and a depth map, and the color-coded points of the canonical configuration mapped to the deformed meshes based on . The same colors on corresponding points indicate the correctness of such mapping.
Figure 4: Analyzing Shading Effects. Pairs of corresponding points between the canonical space and the scene at times t = 0.5 and t = 1.

6 Experiments

This section provides a thorough evaluation of our system. We first test the main components of the model, namely the canonical and deformation networks (Sec. 6.1). We then compare D-NeRF against NeRF and T-NeRF, a variant that does not use the canonical mapping (Sec. 6.2). Finally, we demonstrate D-NeRF's ability to synthesize novel views at an arbitrary time in several complex dynamic scenes (Sec. 6.3).

In order to perform an exhaustive evaluation we have extended the NeRF [mildenhall2020nerf] rigid benchmark with eight scenes containing dynamic objects under large deformations and realistic non-Lambertian materials. As in the rigid benchmark of [mildenhall2020nerf], six scenes are rendered from viewpoints sampled from the upper hemisphere, and two are rendered from viewpoints sampled on the full sphere. Each scene contains between 100 and 200 rendered views depending on the action time span, all at 800 × 800 pixels. We will release the path-traced images with defined train/validation/test splits for these eight scenes.

6.1 Dissecting the Model

This subsection provides insights into D-NeRF's behaviour when modeling a dynamic scene and analyzes its two main modules, namely the canonical and deformation networks.

We initially evaluate the ability of the canonical network to represent the scene in a canonical configuration. The results of this analysis for two scenes are shown in the first row of Fig. 3 (columns 1-3 in each case). The plots show, for the canonical configuration (t = 0), the RGB image, the 3D occupancy and the depth map, respectively. The rendered RGB image is the result of evaluating the canonical network on rays cast from an arbitrary camera position, applying Eq. (6). To better visualize the learned volumetric density we transform it into a mesh applying marching cubes [lorensen1987marching], with a 3D cube resolution of 256³ voxels. Note how D-NeRF is able to model fine geometric and appearance details for complex topologies and texture patterns, even though it was only trained with a set of sparse images, each under a different deformation.

In a second experiment we assess the capacity of the deformation network to estimate consistent deformation fields that map the canonical scene to the particular shape of each input image. The second and third rows of Fig. 3 show the result of applying the corresponding translation vectors to the canonical space for t = 0.5 and t = 1. The fourth column in each of the two examples visualizes the displacement field, where the color-coded points in the canonical shape (t = 0) are mapped to the different shape configurations at t = 0.5 and t = 1. Note that the colors are consistent along the time instants, indicating that the displacement field is correctly estimated.

Another question we try to answer is how D-NeRF manages to model phenomena like shadows and shading effects, that is, how the model can encode changes in the appearance of the same point over time. We have carried out an additional experiment to answer this. In Fig. 4 we show a scene with three balls, made of very different materials (plastic –green–, translucent glass –blue– and metal –red–). The figure plots pairs of corresponding points between the canonical configuration and the scene at a specific time instant. D-NeRF is able to synthesize the shading effects by warping the canonical configuration. For instance, observe how the floor shadows are warped along time. Note that the points in the shadow of the red ball at t = 0.5 and t = 1 map to different regions of the canonical space.

Figure 5: Qualitative Comparison. Novel view synthesis results of dynamic scenes. For every scene we show an image synthesised from a novel view at an arbitrary time by our method, and three close-ups for: ground-truth, NeRF, T-NeRF, and D-NeRF (ours).
                Hell Warrior              Mutant                    Hook                      Bouncing Balls
           MSE↓   PSNR↑ SSIM↑ LPIPS↓  MSE↓  PSNR↑ SSIM↑ LPIPS↓  MSE↓   PSNR↑ SSIM↑ LPIPS↓  MSE↓  PSNR↑ SSIM↑ LPIPS↓
NeRF       44e-3  13.52 0.81  0.25    9e-4  20.31 0.91  0.09    21e-3  16.65 0.84  0.19    1e-2  18.28 0.88  0.23
T-NeRF     47e-4  23.19 0.93  0.08    8e-4  30.56 0.96  0.04    18e-4  27.21 0.94  0.06    6e-4  32.01 0.97  0.04
D-NeRF     31e-4  25.02 0.95  0.06    7e-4  31.29 0.97  0.02    11e-4  29.25 0.96  0.11    5e-4  32.80 0.98  0.03

                Lego                      T-Rex                     Stand Up                  Jumping Jacks
           MSE↓   PSNR↑ SSIM↑ LPIPS↓  MSE↓  PSNR↑ SSIM↑ LPIPS↓  MSE↓   PSNR↑ SSIM↑ LPIPS↓  MSE↓  PSNR↑ SSIM↑ LPIPS↓
NeRF       9e-4   20.30 0.79  0.23    3e-3  24.49 0.93  0.13    1e-2   18.19 0.89  0.14    1e-2  18.28 0.88  0.23
T-NeRF     3e-4   23.82 0.90  0.15    9e-4  30.19 0.96  0.13    7e-4   31.24 0.97  0.02    6e-4  32.01 0.97  0.03
D-NeRF     6e-4   21.64 0.83  0.16    6e-4  31.75 0.97  0.03    5e-4   32.79 0.98  0.02    5e-4  32.80 0.98  0.03

Table 1: Quantitative Comparison. Per scene we report MSE and LPIPS (lower is better) and PSNR and SSIM (higher is better).

6.2 Quantitative Comparison

We next evaluate the quality of D-NeRF on the novel view synthesis problem and compare it against the original NeRF [mildenhall2020nerf], which represents the scene using a 5D input (x, y, z, θ, φ), and T-NeRF, a straightforward extension of NeRF in which the scene is represented by a 6D input (x, y, z, θ, φ, t), without considering the intermediate canonical configuration of D-NeRF.

Table 1 summarizes the quantitative results on the 8 dynamic scenes of our dataset. We use several metrics for the evaluation: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) [wang2004image], and Learned Perceptual Image Patch Similarity (LPIPS) [zhang2018perceptual]. In Fig. 5 we show samples of the estimated images under a novel view for visual inspection. As expected, NeRF is not able to model dynamic scenes, as it was designed for rigid cases, and always converges to a blurry mean representation of all deformations. On the other hand, the T-NeRF baseline captures the dynamics reasonably well, although it is not able to retrieve high-frequency details. For example, in the top-left image of Fig. 5 it fails to encode the shoulder-pad spikes, and in the top-right scene it is not able to model the stones and cracks. D-NeRF, instead, retains high details of the original image in the novel views. This is quite remarkable, considering that each deformation state has only been seen from a single viewpoint.

Figure 6: Time & View Conditioning. Results of synthesising diverse scenes from two novel points of view across time and the learned canonical space. For every scene we also display the learned scene canonical space in the first column.

6.3 Additional Results

We finally show additional results to showcase the wide range of scenarios that can be handled with D-NeRF. Fig. 6 depicts, for four scenes, the images rendered at different time instants from two novel viewpoints. The first column displays the canonical configuration. Note that we are able to handle several types of dynamics: articulated motion in the Tractor scene; human motion in the Jumping Jacks and Warrior scenes; and asynchronous motion of several Bouncing Balls. Also note that the canonical configuration is a sharp and neat scene in all cases, except for the Jumping Jacks, where the two arms appear blurry. This, however, does not harm the quality of the rendered images, indicating that the network is able to warp the canonical configuration so as to maximize the rendering quality. This is indeed consistent with the Sec. 6.1 insights on how the network encodes shading.

7 Conclusion

We have presented D-NeRF, a novel neural radiance field approach for modeling dynamic scenes. Our method can be trained end-to-end from only a sparse set of images acquired with a moving camera, and does not require pre-computed 3D priors nor observing the same scene configuration from different viewpoints. The main idea behind D-NeRF is to represent time-varying deformations with two modules: one that learns a canonical configuration, and another that learns the displacement field of the scene at each time instant w.r.t. the canonical space. A thorough evaluation demonstrates that D-NeRF is able to synthesise high quality novel views of scenes undergoing different types of deformation, from articulated objects to human bodies performing complex body postures.


This work is supported in part by a Google Daydream Research award and by the Spanish government with the project HuMoUR TIN2017-90086-R, the ERA-Net Chistera project IPALM PCI2019-103386 and María de Maeztu Seal of Excellence MDM-2016-0656. Gerard Pons-Moll is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 409792180 (Emmy Noether Programme, project: Real Virtual Humans)