3D Ken Burns Effect from a Single Image

09/12/2019
by Simon Niklaus, et al.

The Ken Burns effect animates still images with a virtual camera scan and zoom. Adding parallax, which results in the 3D Ken Burns effect, enables significantly more compelling results. Creating such effects manually is time-consuming and demands sophisticated editing skills, while existing automatic methods require multiple input images from varying viewpoints. In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode in which the user controls the camera. Our framework first leverages a depth-prediction pipeline that estimates scene depth suitable for view synthesis. To address the limitations of existing depth estimation methods, such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement network that yields accurate depth at object boundaries. Based on this depth estimate, our framework maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent results, we use context-aware color and depth inpainting to fill in the missing information in the extreme views of the camera path, thereby extending the scene geometry of the point cloud. Experiments with a wide variety of image content show that our method produces realistic results. A user study further demonstrates that, compared to existing solutions for creating the 3D Ken Burns effect, our system achieves better results with far less effort.
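The view-synthesis step the abstract describes, lifting the image to a point cloud via the estimated depth and re-rendering it along a virtual camera path, can be illustrated with a minimal sketch. The NumPy code below is a toy under stated assumptions, not the paper's implementation: it uses a simple pinhole model with a made-up focal length, random stand-ins for the input photo and the predicted depth, a pure forward camera translation, and it leaves disoccluded pixels as holes rather than inpainting them.

```python
# Minimal sketch: un-project an RGB image into a point cloud using a depth
# map, then re-project it from virtual camera positions along a zoom path.
# Depth estimation and inpainting are out of scope; intrinsics and the
# camera path are illustrative assumptions.
import numpy as np

def unproject(image, depth, focal):
    """Map each pixel to a 3D point with a pinhole camera model."""
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x = (xs - cx) * depth / focal
    y = (ys - cy) * depth / focal
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points, image.reshape(-1, 3)

def render(points, colors, focal, size, camera_t):
    """Project the point cloud into a camera translated by camera_t.
    Disocclusions remain as holes, which the paper fills with its
    context-aware color and depth inpainting."""
    h, w = size
    cx, cy = w / 2.0, h / 2.0
    p = points - camera_t                  # move the camera, keep orientation
    z = p[:, 2]
    valid = z > 1e-6                       # keep points in front of the camera
    u = np.round(p[valid, 0] * focal / z[valid] + cx).astype(int)
    v = np.round(p[valid, 1] * focal / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v = u[inside], v[inside]
    zc, col = z[valid][inside], colors[valid][inside]
    frame = np.zeros((h, w, 3), dtype=colors.dtype)
    order = np.argsort(-zc)                # splat far-to-near: nearest wins
    frame[v[order], u[order]] = col[order]
    return frame

# Hypothetical usage: dolly the camera forward for a zoom with parallax.
image = np.random.rand(120, 160, 3)        # stand-in for the input photo
depth = 1.0 + np.random.rand(120, 160)     # stand-in for predicted depth
points, colors = unproject(image, depth, focal=140.0)
frames = [render(points, colors, 140.0, depth.shape, np.array([0.0, 0.0, t]))
          for t in np.linspace(0.0, 0.3, 30)]
```

Sorting the splats far-to-near acts as a crude z-buffer, so the nearest point claims each pixel; the pixels no point reaches are exactly the disocclusions that the paper's inpainting networks fill to extend the scene geometry.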
