Cut-and-Paste Neural Rendering

10/12/2020
by Anand Bhattad et al.

Cut-and-paste methods take an object from one image and insert it into another. Doing so often results in unrealistic-looking images because the inserted object's shading is inconsistent with the target scene's shading. Existing reshading methods require a geometric and physical model of the inserted object, which is then rendered using environment parameters. Accurately constructing such a model from only a single image is beyond the current understanding of computer vision. We describe an alternative procedure – cut-and-paste neural rendering – that renders the inserted fragment's shading field consistently with the target scene. We use a Deep Image Prior (DIP) as a neural renderer trained to render an image with consistent image-decomposition inferences. The resulting rendering from DIP should have an albedo consistent with the composite albedo; its shading field should match the target scene's shading field outside the inserted fragment; and the composite surface normals should be consistent with the final rendering's shading field. The result is a simple procedure that produces convincing and realistic shading. Moreover, our procedure does not require rendered images, image decompositions of real images, or labeled annotations for training. In fact, our only use of simulated ground truth is a pre-trained normal estimator. Qualitative results are strong, supported by a user study comparing against a state-of-the-art image harmonization baseline.
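To make the three consistency constraints above concrete, the following is a minimal PyTorch-style sketch of how such a DIP objective could be assembled. It is an illustration under assumptions, not the authors' implementation: the names dip_renderer, albedo_net, shading_net, normal_net, and all tensor arguments are hypothetical stand-ins for a DIP generator and pre-trained decomposition/normal networks, and the third term only approximates the normals–shading consistency described in the abstract.

```python
# Hypothetical sketch of the three consistency losses for cut-and-paste
# neural rendering with a Deep Image Prior (DIP). Not the authors' code.
import torch
import torch.nn.functional as F

def reshading_losses(dip_renderer, albedo_net, shading_net, normal_net,
                     z, composite_albedo, target_shading, composite_normals,
                     fragment_mask):
    """Assemble the consistency terms described in the abstract.

    z                 -- fixed random input to the DIP generator
    composite_albedo  -- albedo of the cut-and-paste composite image
    target_shading    -- shading field inferred from the target scene
    composite_normals -- surface normals of the composite (fragment + scene)
    fragment_mask     -- 1 inside the inserted fragment, 0 elsewhere
    """
    rendering = dip_renderer(z)  # candidate reshaded image

    # (1) The rendering's albedo should match the composite albedo.
    albedo_loss = F.mse_loss(albedo_net(rendering), composite_albedo)

    # (2) Outside the inserted fragment, the rendering's shading should
    #     equal the target scene's shading field.
    shading = shading_net(rendering)
    outside = 1.0 - fragment_mask
    shading_loss = F.mse_loss(shading * outside, target_shading * outside)

    # (3) The composite surface normals should be consistent with the
    #     rendering's shading field; approximated here as agreement with
    #     the normals a pre-trained estimator recovers from the rendering.
    normal_loss = F.mse_loss(normal_net(rendering), composite_normals)

    return albedo_loss + shading_loss + normal_loss

# Typical DIP usage (assumption): keep z fixed and optimize the generator's
# weights against this loss, e.g. with torch.optim.Adam.
```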


Related research

Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition (06/29/2020)
Photorealistic Image Synthesis for Object Instance Detection (02/09/2019)
An Approximate Shading Model with Detail Decomposition for Object Relighting (04/20/2018)
Rendering Synthetic Objects into Legacy Photographs (12/24/2019)
SIRfyN: Single Image Relighting from your Neighbors (12/08/2021)
DeRenderNet: Intrinsic Image Decomposition of Urban Scenes with Shape-(In)dependent Shading Rendering (04/28/2021)
Single-image Full-body Human Relighting (07/15/2021)
