People as Scene Probes

07/17/2020
by Yifan Wang, et al.

By analyzing the motion of people and other objects in a scene, we demonstrate how to infer depth, occlusion, lighting, and shadow information from video taken from a single camera viewpoint. This information is then used to composite new objects into the same scene with a high degree of automation and realism. In particular, when a user places a new object (2D cut-out) in the image, it is automatically rescaled, relit, and occluded properly, and it casts realistic shadows that point in the correct direction relative to the sun and conform properly to scene geometry. We demonstrate results (best viewed in the supplementary video) on a range of scenes and compare to alternative methods for depth estimation and shadow compositing.
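
The abstract is necessarily high level; as one concrete illustration of the "people as probes" idea, the sketch below fits a ground-plane scale model from tracked pedestrian heights and uses it to rescale a newly inserted 2D cut-out. It is a minimal sketch, not the authors' actual system: it assumes roughly constant-height pedestrians and a pinhole camera over a flat ground plane (which makes apparent height approximately linear in the image row of the feet), and all names and numbers are illustrative.

```python
import numpy as np

# Observed "probes": (foot_row_px, person_height_px) pairs collected by
# tracking pedestrians across the video. Values here are illustrative.
probes = np.array([
    [420.0, 180.0],
    [360.0, 132.0],
    [310.0,  95.0],
    [270.0,  62.0],
])

# Under a pinhole camera viewing a flat ground plane, the pixel height of a
# constant-height person is approximately linear in the image row of their
# feet: h(y) = a * y + b. Fit a and b by least squares.
y, h = probes[:, 0], probes[:, 1]
A = np.stack([y, np.ones_like(y)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, h, rcond=None)
horizon_row = -b / a  # row where apparent height shrinks to zero

def scale_for_insertion(foot_row, cutout_height_px, object_height_cm=170.0):
    """Scale factor for a 2D cut-out whose base sits at `foot_row`.

    Assumes probe people are ~170 cm tall, so an object `object_height_cm`
    tall should appear (object_height_cm / 170) times the predicted person
    height at that row. This is an illustrative calibration, not the
    paper's.
    """
    predicted_person_px = a * foot_row + b
    target_px = predicted_person_px * (object_height_cm / 170.0)
    return target_px / cutout_height_px

print(f"fitted horizon row: {horizon_row:.1f}")
print(f"scale for a 200px cut-out placed at row 390: "
      f"{scale_for_insertion(390.0, 200.0):.2f}")
```

The same linear fit also yields a horizon estimate for free, which is one reason a handful of tracked pedestrians can stand in for explicit camera calibration in this kind of compositing.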


Related research

Automatic Scene Inference for 3D Object Compositing (12/24/2019)
We present a user-friendly image editing system that supports a drag-and...

Space-time Neural Irradiance Fields for Free-Viewpoint Video (11/25/2020)
We present a method that learns a spatiotemporal neural irradiance field...

Consistent Depth of Moving Objects in Video (08/02/2021)
We present a method to estimate depth of a dynamic scene, containing arb...

Peeking Behind Objects: Layered Depth Prediction from a Single Image (07/23/2018)
While conventional depth estimation can infer the geometry of a scene fr...

Self-Supervised Depth Learning for Urban Scene Understanding (12/13/2017)
As an agent moves through the world, the apparent motion of scene elemen...

Improving Self-Supervised Single View Depth Estimation by Masking Occlusion (08/29/2019)
Single view depth estimation models can be trained from video footage us...

Automatic Objects Removal for Scene Completion (01/23/2015)
With the explosive growth of web-based cameras and mobile devices, billi...
