3D Ken Burns Effect from a Single Image

09/12/2019
by Simon Niklaus, et al.

The Ken Burns effect animates still images with a virtual camera pan and zoom. Adding parallax, which yields the 3D Ken Burns effect, enables significantly more compelling results. Creating such effects manually is time-consuming and demands sophisticated editing skills. Existing automatic methods, however, require multiple input images from varying viewpoints. In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode in which the user controls the camera. Our framework first leverages a depth prediction pipeline that estimates scene depth suitable for view synthesis tasks. To address the limitations of existing depth estimation methods, such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. Given this depth estimate, our framework maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color and depth inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud. Experiments with a wide variety of image content show that our method enables realistic synthesis results. A user study demonstrates that our system lets users achieve better results with less effort than existing solutions for creating the 3D Ken Burns effect.
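The core geometric step the abstract describes, lifting the image to a point cloud via a depth map and re-rendering it from a moved virtual camera, can be sketched as follows. This is not the authors' code: the pinhole intrinsics, the simple nearest-point-wins splatting, and the camera translation are illustrative assumptions, and the depth prediction and inpainting stages are omitted.

```python
import numpy as np

def unproject(depth, f, cx, cy):
    """Lift each pixel (u, v) with depth z to a 3D point via a pinhole model."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    z = depth
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def render(points, colors, f, cx, cy, h, w, t):
    """Project points seen from a camera translated by t; nearest point wins."""
    p = points - t                        # move the camera, keep the world fixed
    z = p[:, 2]
    keep = z > 1e-6                       # drop points behind the camera
    u = np.round(f * p[keep, 0] / z[keep] + cx).astype(int)
    v = np.round(f * p[keep, 1] / z[keep] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v = u[inside], v[inside]
    zk, ck = z[keep][inside], colors[keep][inside]
    order = np.argsort(-zk)               # splat far to near: nearer writes win
    image = np.zeros((h, w, 3))           # unfilled pixels are the disocclusions
    image[v[order], u[order]] = ck[order]
    return image
```

Sweeping `t` along a camera path and rendering one frame per position produces the parallax animation; the black holes left by `render` are exactly the disoccluded regions that the paper's context-aware color and depth inpainting fills in.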



