Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

03/08/2017
by   Eunbyung Park, et al.

We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible in both the input and novel views, and then re-cast the remaining synthesis problem as image completion. Specifically, we predict both a flow that moves pixels from the input to the novel view and a novel visibility map that helps deal with occlusion/disocclusion. Next, conditioned on those intermediate results, we hallucinate (infer) the parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual losses reduces common artifacts of novel view synthesis, such as distortions and holes, while successfully generating high-frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show that our method achieves significantly better results than existing methods.
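The core idea above — warp the pixels that remain visible, mask out disoccluded regions, and leave the rest for a completion network — can be illustrated with a minimal NumPy sketch. This is not the paper's network: the flow here is a backward sampling field (for each target pixel, the source coordinates to read from), sampling is nearest-neighbor for brevity, and all names (`warp_with_flow`, etc.) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def warp_with_flow(image, flow, visibility):
    """Warp `image` (H, W, C) by a dense backward flow (H, W, 2) that
    gives, for each target pixel, the (y, x) source coordinates to
    sample.  Pixels the visibility map marks as disoccluded are zeroed
    so a downstream completion network can fill them in."""
    H, W, _ = image.shape
    ys = np.clip(np.round(flow[..., 0]).astype(int), 0, H - 1)
    xs = np.clip(np.round(flow[..., 1]).astype(int), 0, W - 1)
    warped = image[ys, xs]                 # sample source pixels
    return warped * visibility[..., None]  # mask disoccluded regions

# Toy example: shift a 4x4 image one pixel to the right; the leftmost
# column has no valid source, so the visibility map marks it invisible.
H = W = 4
img = np.arange(H * W * 3, dtype=float).reshape(H, W, 3)
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
flow = np.stack([yy, xx - 1], axis=-1)     # each target samples the pixel to its left
vis = np.ones((H, W))
vis[:, 0] = 0.0                            # disoccluded column
out = warp_with_flow(img, flow, vis)
```

In the actual method, both the flow field and the visibility map are predicted by the network rather than constructed by hand, and the masked output is the conditioning input to the image-completion stage.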


