ReShader: View-Dependent Highlights for Single Image View-Synthesis

09/19/2023
by   Avinash Paliwal, et al.

In recent years, novel view synthesis from a single image has seen significant progress thanks to rapid advancements in 3D scene representation and image inpainting techniques. While current approaches can synthesize geometrically consistent novel views, they often do not handle view-dependent effects properly. Specifically, the highlights in their synthesized images usually appear glued to the surfaces, making the novel views unrealistic. To address this problem, we make a key observation: synthesizing a novel view requires both changing the shading of the pixels based on the novel camera and moving them to the appropriate locations. We therefore propose to split the view synthesis process into two independent tasks: pixel reshading and pixel relocation. During reshading, we take the single image as input and adjust its shading based on the novel camera. The reshaded image is then fed to an existing view synthesis method, which relocates the pixels and produces the final novel view. We train a neural network to perform reshading, generating a large set of synthetic input-reshaded pairs as training data. We demonstrate that our approach produces plausible novel view images with realistic moving highlights on a variety of real-world scenes.
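The two-stage decomposition described above can be sketched as a small pipeline. This is a minimal illustration, not the paper's implementation: the reshading network and view-synthesis backend are replaced with hypothetical toy stand-ins, and the camera is reduced to a simple dictionary, purely to show how the stages compose.

```python
import numpy as np

def synthesize_novel_view(image, novel_camera, reshading_net, view_synthesizer):
    # Stage 1: adjust per-pixel shading for the novel camera
    # (in the paper, a trained neural network).
    reshaded = reshading_net(image, novel_camera)
    # Stage 2: relocate pixels with an existing single-image
    # view-synthesis method, applied to the reshaded image.
    return view_synthesizer(reshaded, novel_camera)

# Toy stand-ins (hypothetical) so the sketch runs end to end.
def toy_reshading_net(image, camera):
    # Fake view-dependent shading: scale brightness by a camera-dependent gain.
    return np.clip(image * camera["gain"], 0.0, 1.0)

def toy_view_synthesizer(image, camera):
    # Fake pixel relocation: horizontal shift proportional to camera motion.
    return np.roll(image, shift=camera["dx"], axis=1)

image = np.full((4, 4, 3), 0.5)          # uniform gray test image
camera = {"gain": 1.2, "dx": 1}          # stand-in for novel camera parameters
novel = synthesize_novel_view(image, camera, toy_reshading_net, toy_view_synthesizer)
print(novel.shape)  # (4, 4, 3)
```

The point of the split is that each stage can be handled by a method suited to it: shading changes by a dedicated network, geometry changes by any off-the-shelf view-synthesis system.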


Related research:

- Consistent View Synthesis with Pose-Guided Diffusion Models (03/30/2023): Novel view synthesis from a single image has been a cornerstone problem ...
- PixelSynth: Generating a 3D-Consistent Experience from a Single Image (08/12/2021): Recent advancements in differentiable rendering and 3D reasoning have dr...
- Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single Panorama (06/21/2021): We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first ...
- Deep View Synthesis via Self-Consistent Generative Network (01/19/2021): View synthesis aims to produce unseen views from a set of views captured...
- Image Completion for View Synthesis Using Markov Random Fields and Efficient Belief Propagation (06/24/2014): View synthesis is a process for generating novel views from a scene whic...
- altiro3D: Scene representation from single image and novel view synthesis (04/02/2023): We introduce altiro3D, a free extended library developed to represent re...
- NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images (11/26/2021): Neural Radiance Fields (NeRF) is a technique for high quality novel view...
