NOVA: NOvel View Augmentation for Neural Composition of Dynamic Objects

08/24/2023
by Dakshit Agrawal, et al.

We propose a novel-view augmentation (NOVA) strategy to train NeRFs for photo-realistic 3D composition of dynamic objects in a static scene. Compared to prior work, our framework significantly reduces blending artifacts when inserting multiple dynamic objects into a 3D scene at novel views and times; achieves comparable PSNR without the need for additional ground truth modalities like optical flow; and overall provides ease, flexibility, and scalability in neural composition. Our codebase is on GitHub.
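The neural composition the abstract refers to can be illustrated with a generic density-weighted blend of several radiance fields along each ray, a scheme common to NeRF-composition work in general. The sketch below is an illustrative assumption about that general mechanism, not NOVA's exact formulation; `composite_ray`, the number of fields, and the random densities and colors standing in for NeRF MLP outputs are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def composite_ray(sigmas_per_field, colors_per_field, deltas):
    """Render one ray by merging a static-scene field with dynamic-object fields.

    sigmas_per_field: list of (S,) densities, one array per radiance field
    colors_per_field: list of (S, 3) RGB values, one array per radiance field
    deltas:           (S,) distances between consecutive samples along the ray
    """
    sigmas = np.stack(sigmas_per_field)            # (F, S)
    colors = np.stack(colors_per_field)            # (F, S, 3)

    sigma_total = sigmas.sum(axis=0)               # (S,) combined density
    # Density-weighted mixture of per-field colors at each sample point.
    mix = (sigmas / np.clip(sigma_total, 1e-8, None))[..., None] * colors
    color_mix = mix.sum(axis=0)                    # (S, 3)

    # Standard NeRF alpha compositing along the ray using the combined density.
    alphas = 1.0 - np.exp(-sigma_total * deltas)   # (S,)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * trans                       # (S,)
    return (weights[:, None] * color_mix).sum(axis=0)  # (3,) rendered RGB

# Toy usage: random stand-ins for one static-scene field and two dynamic-object
# fields, queried along a single ray.
num_samples = 64
deltas = np.full(num_samples, 0.05)
sigmas = [rng.uniform(0.0, 2.0, num_samples) for _ in range(3)]
colors = [rng.uniform(0.0, 1.0, (num_samples, 3)) for _ in range(3)]
print(composite_ray(sigmas, colors, deltas))
```

In a blend of this kind, each field contributes color in proportion to its density at every sample, so dynamic objects occlude or mix with the static background without explicit masks; the novel-view augmentation described in the abstract would enter through the camera poses from which such rays are cast during training.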


