1 Introduction
Human beings can easily hallucinate what a scene would look like from a different viewpoint, or, for a dynamic scene, in the near future. Automatically performing such novel view synthesis, however, remains a challenging task for computer vision systems.
Over the past two decades, the most popular approach to synthesizing new views has been to reconstruct an exact or approximate 3D scene model from multiple views [30, 17, 18, 25, 2]. By contrast, view synthesis from a single image, which can be applied to a broader range of problems, has received much less attention. To overcome the lack of depth information, early methods have proposed to leverage semantic-based priors [12] and geometric cues, such as vanishing points [13], which, while effective, tend to be less robust than their multi-view counterparts.
Figure 1: Input view | Ground-truth novel view | [29] | Ours. Given an input image of the scene and a relative pose, we seek to predict a new image of the scene observed from this new viewpoint. To this end, and in contrast with state-of-the-art methods, we propose to explicitly rely on 3D geometry within a deep learning paradigm. As a consequence, and as evidenced by our results, our predictions better respect the scene structure and are thus more realistic.
Inspired by the recent deep learning revolution in computer vision, several works have proposed to exploit Deep Convolutional Neural Networks (CNNs) to tackle the novel view synthesis problem [6, 24, 29]. Whether predicting image pixels directly [24], plane-sweep volumes [6], appearance flow [29], or appearance flow, visibility and the intensity of pixels that were not in the input view [20], these methods, in essence, all aim to solely leverage appearance, predicting the flow of each pixel from the input view to the novel view without exploiting the flow of the other pixels. As such, as shown in Fig. 1, they tend to generate artefacts, such as distorted local structures in the synthesized images.

In this paper, we propose to explicitly account for 3D geometry, and thus respect 3D scene structure, in the single-image novel view synthesis process. To this end, we approximate the scene by a fixed number of planes, and learn to predict corresponding homographies that, once applied to the input image, generate a set of candidate images for the novel view. We then learn to predict a selection map corresponding to each homography, which, after warping, is used to combine the candidate images to generate the novel view. In essence, our homography-based approach enforces geometric constraints on the flow field, thus modeling scene structure. Our approach can be thought of as a divide-and-conquer strategy that allows us to encode a 3D geometric prior while learning the image transformation.
To achieve this, we develop a novel deep architecture consisting of two sub-networks. The first one estimates pixel-wise depth and normals in the input image, which, in conjunction with the relative pose between the input and novel views, are then used to estimate one homography for each planar region in the scene. These homographies then let us produce a set of warped input images. The second sub-network aims to predict a pixel-wise probability, or selection map, encoding to which homography each input pixel should be associated. These maps are then warped with the corresponding predicted homographies, and the novel view is generated by combining the warped input images according to the warped selection maps. To account for pixels not in the input view and potential blur arising from the combination of multiple warped images, inspired by [20], we further propose to refine the synthesized image with an encoder-decoder network with skip connections. As evidenced by Fig. 1, our complete framework yields realistic-looking novel views.

We demonstrate the effectiveness of our approach on the challenging KITTI odometry dataset [9] and ScanNet [4], depicting complex urban outdoor scenes and indoor scenes, respectively. Thanks to our geometry-based reasoning, our method not only outperforms the state-of-the-art appearance flow technique of [29] quantitatively, but also yields visually more realistic predictions.
2 Related Work
Over the years, two main classes of methods have been proposed to address the novel view synthesis problem: those that rely on geometry, and the more recent ones that exploit deep learning. Below, we review the methods belonging to these two classes.
Geometry-based view synthesis.
Originally, the most popular approach to view synthesis consisted of explicitly modeling 3D information, via either a detailed 3D model, or an approximate representation of the 3D scene structure. This idea was introduced in [18] more than two decades ago, by relying on multi-view stereo and a warping strategy. With the impressive progress of multi-view 3D reconstruction techniques [7], highly detailed models can be obtained, and novel views generated by making use of the target pose given as input. In complex scenes, however, this process remains challenging due to, e.g., occlusions leading to holes in the 3D models. In this context, [2] first reconstructs a partial scene from multiple images, and then synthesizes depth to fill in the missing pixels and correct the unreliable regions. Instead of relying on dense reconstruction, [30] leverages sparse points obtained from structure-from-motion in conjunction with segmented image regions, each of which is assumed to be planar and associated to a homography to warp the input image. While effective in their context, these methods are inapplicable to the scenario where a single image is available to synthesize a novel view.
Only little work has been done to leverage geometry for single-image novel view synthesis. In particular, [13] models the scene as an axis-aligned box, and requires a user to annotate the box coordinates, vanishing points and foreground to be able to render the model from a different viewpoint. In [12], the image is labeled into three geometric classes, which defines an approximate scene structure that can be rendered from a new viewpoint. These methods, however, only model a very coarse structure of the scene, and therefore cannot yield realistic novel views. By contrast, the recent work of [22] leverages a large collection of 3D models to infer the one closest to an input image. While effective for individual objects, this approach does not translate well to complex, real-world scenes with rich structures and dynamic motion, such as urban ones.
View synthesis from CNNs.
With the advent of deep learning in computer vision, CNNs have recently been investigated to generate novel views. In particular, [6] proposes to synthesize the novel image from neighboring views. To this end, a plane-sweep volume, encoding a set of possible image appearances, is used as input to a network whose goal is to select the correct pixel appearance in the volume. This framework, however, requires a large memory and was only evaluated for view interpolation. Similarly,
[15] tackles the view interpolation task from a pair of images, but aims to learn to rectify the two images and predict pixel correspondences. The novel view is generated by fusing the pixels of the image pair using the estimated correspondences. In contrast to these methods, we focus on single-image view synthesis.

In this context, [16] trains a variational autoencoder to decouple the image into hidden factors, constrained to correspond to viewpoint and lighting conditions. While this network can generate an image from a new viewpoint by manipulating the hidden factors, it is mostly restricted to small rotations. In [24], an encoder-decoder network is trained to directly synthesize the pixels of the new view from the input image and the relative pose. While this network was shown to handle large rotations, the predicted images are typically blurry. Instead of directly synthesizing the image, [29] proposes to predict the displacements of the pixels from the input view to the new one, named the appearance flow. While this method yields sharper results, by predicting the displacements in a pixel-wise manner, it does not account for the scene structure, and thus, as illustrated in Fig. 1, introduces unrealistic artefacts. The recent work of [20] builds upon appearance flow by additionally predicting a visibility map, whose goal is to reflect the visibility constraints arising from a 3D object shape. During training, the ground-truth visibility maps are obtained by making use of 3D CAD models of the objects of interest. While this indeed exploits 3D geometry, at test time, the synthesis process neither explicitly encodes notions of geometry nor preserves local geometric structures in the new image. Furthermore, its use of 3D CAD models makes this approach better suited to single-object view synthesis than to tackling complex real-world scenes.
By contrast, here, we explicitly leverage 3D geometry during the synthesis of the novel view, by developing a deep learning framework that exploits the notion of local homographies. As illustrated by Fig. 1, our geometry-aware deep learning strategy yields realistic predictions that better reflect the scene structure.
Note that some work has focused on the specific case of stereo view synthesis, that is, generating an image of one view from that of the other in a stereo setup [26]. While effective, this does not generalize to arbitrary novel views, since not all 3D information can be explained by disparity. Furthermore, view synthesis has been employed as supervision for depth estimation [8, 28]. However, novel views generated from predicted depth maps are typically highly incomplete, and, while suitable for depth estimation, not realistic-looking. Here, we focus on synthesizing realistic novel views with general pose variations.
3 Our Approach
Our goal is to explicitly leverage information about the 3D scene structure to perform single-image novel view synthesis. To this end, we assume that the scene can be represented with multiple planes and learn to predict their respective homographies, which let us generate a set of candidate images in the new view. We additionally learn to estimate selection maps corresponding to the homographies, which encode to which homography each input pixel should be associated. Warping these maps and using them in conjunction with the candidate new view images lets us synthesize the novel view. We then complete the regions that were unseen in the input view, and thus cannot be synthesized with this strategy, using an encoder-decoder network similar to the generator of [20]. Below, we first introduce our region-aware geometric-transform network, and then discuss this encoder-decoder refinement.
3.1 Region-aware Geometric-transform Network
To learn to predict a novel view from a single image while exploiting the 3D geometry of the scene, we develop the network shown in Fig. 2. This architecture consists of two sub-networks. The bottom one first predicts pixel-wise depth and normals from a single image in two independent streams. These predictions are then used, together with region masks extracted from the input image and the relative pose between the input view and the novel one, to compute multiple homographies, which we employ to warp the input image, thus generating candidate synthesized views. The second sub-network, at the top of Fig. 2, predicts selection masks indicating, for each pixel, to which homography it should be associated. We then compute the novel view by assembling the candidate synthesized images according to the warped selection masks. Below, we describe these different stages in more detail.
Depth and Normal Prediction.
We use standard fully-convolutional architectures to predict pixel-wise depth and normal maps separately. The details of these architectures are provided in the experiments section.
Generating Homographies.
Since we represent the scene as a set of planar surfaces, a novel view can be obtained by applying one homography to each surface. For one plane, a homography can be computed from its depth and normal, given the desired relative pose, i.e., 3D rotation and translation, and the camera intrinsic parameters. To model different planes, we make use of a segmentation of the input image into regions, referred to as seed regions and described in Section 3.1.2, to pool the above-mentioned pixel-wise depth and normal estimates.
More specifically, let $M$ be an $n \times H \times W$ binary tensor encoding $n$ segmentation masks obtained from the input image $I_s$. Furthermore, let us denote by $M_i$ the binary mask corresponding to the $i$-th segment. Assuming that each segment is planar, we approximate its normal as

$$\mathbf{n}_i = \frac{1}{\sum_{p \in \Omega} M_i(p)} \sum_{p \in \Omega} M_i(p)\, \mathbf{n}(p), \qquad (1)$$

where $\Omega$ denotes the set of all pixel locations, and $\mathbf{n}(p)$ corresponds to the normal estimate at location $p$. We then normalize $\mathbf{n}_i$ to have unit norm.
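As a minimal sketch, the region-pooled normal of Eq. 1 amounts to a masked average followed by renormalization. The function name, argument names, and the dense H x W x 3 layout below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def region_normal(normals, mask):
    """Average per-pixel normal estimates over a binary region mask and
    renormalize to unit length (Eq. 1). `normals` is H x W x 3, `mask`
    is H x W in {0, 1}."""
    n = (normals * mask[..., None]).sum(axis=(0, 1)) / mask.sum()
    return n / np.linalg.norm(n)
```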
A plane with normal $\mathbf{n}_i$ can be defined by a vector $(\mathbf{n}_i^\top, d_i)^\top$, such that any 3D point $\mathbf{x}$ on the plane satisfies $\mathbf{n}_i^\top \mathbf{x} + d_i = 0$. While our average normal estimate provides us with the first 3 parameters, we still need to compute $d_i$. To this end, let us consider the center of region $i$, with coordinates $c_i$. We approximate the depth at the center location as

$$\bar{z}_i = \frac{1}{\sum_{p \in \Omega} M_i(p)} \sum_{p \in \Omega} M_i(p)\, z(p), \qquad (2)$$

where $z(p)$ corresponds to the depth estimate at location $p$. This allows us to increase robustness to noise in the predicted depth map compared to directly using $z(c_i)$. Given the matrix of camera intrinsic parameters $K$, the corresponding 3D point can be expressed as

$$\mathbf{x}_i = \bar{z}_i\, K^{-1} \begin{bmatrix} c_i \\ 1 \end{bmatrix}. \qquad (3)$$

By making use of the plane constraint, we can estimate the last parameter as $d_i = -\mathbf{n}_i^\top \mathbf{x}_i$.

Finally, let $\tilde{\mathbf{n}}_i = \mathbf{n}_i / d_i$. Given the relative rotation matrix $R$ and translation vector $\mathbf{t}$ between the input and novel views, the homography for region $i$ can be expressed as

$$H_i = K \left( R - \mathbf{t}\, \tilde{\mathbf{n}}_i^\top \right) K^{-1}.$$

This lets us compute a homography for every seed region.
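The per-region homography can be sketched in a few lines using the standard plane-induced form $H = K(R - \mathbf{t}\,\mathbf{n}^\top/d)K^{-1}$; the function name and the assumption of shared intrinsics for both views are ours:

```python
import numpy as np

def plane_homography(K, R, t, n, d):
    """Homography induced by the plane n^T x + d = 0 between two views
    related by rotation R and translation t. Assumes the same camera
    intrinsics K for both views."""
    n_tilde = n / d
    return K @ (R - np.outer(t, n_tilde)) @ np.linalg.inv(K)
```

For an identity relative pose, the homography reduces to the identity, which is a quick sanity check on the sign conventions.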
Inverse Image Warping.
Each resulting homography $H_i$ can be applied to the pixels of the input (source) image. For each source pixel $p$, this can be written as

$$\lambda \mathbf{q} = H_i\, \mathbf{p}, \qquad (4)$$

with $\mathbf{p}$ the pixel location in homogeneous coordinates, and $\lambda$ the corresponding scalar. While the result of this operation will indeed correspond to a location in the target image (ignoring the fact that some will lie outside the image range), these locations will not correspond to exact, integer pixel coordinates. In our context of generating a novel view, this would significantly complicate the task of obtaining the intensity value at each target pixel, which would require combining the intensities of nearby transformed locations, whose number would vary for each target pixel.

To address this, instead of following a forward warping strategy (from source image to target image), we rely on an inverse warping (from target image to source image). Specifically, for every target pixel location $q$, we obtain the corresponding source location $\hat{p}$ by relying on the inverse homography as $\hat{\mathbf{p}} \propto H_i^{-1}\, \mathbf{q}$. We then compute the target intensity value at pixel $q$ by bilinear interpolation as

$$\hat{I}_i(q) = \sum_{p' \in \mathcal{N}(\hat{p})} \left(1 - |x_{\hat{p}} - x_{p'}|\right)\left(1 - |y_{\hat{p}} - y_{p'}|\right) I_s(p'), \qquad (5)$$

where $I_s$ is the input source image, and $\mathcal{N}(\hat{p})$ denotes the 4-pixel neighborhood of $\hat{p}$, which itself is predicted by the inverse homography.
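A sketch of this inverse-warping step, assuming a grayscale image and zero intensity for samples falling outside the source; the vectorization details are our own:

```python
import numpy as np

def inverse_warp(src, H, out_shape):
    """Inverse-warp `src` (H x W, grayscale for simplicity) with the
    homography H: each target pixel q is mapped to p = H^{-1} q in the
    source and sampled by bilinear interpolation over its 4 neighbours.
    Samples falling outside the source image contribute 0."""
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    q = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous
    p = Hinv @ q
    x, y = p[0] / p[2], p[1] / p[2]                   # source locations
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    wx, wy = x - x0, y - y0                           # bilinear weights
    out = np.zeros(h * w)
    sh, sw = src.shape
    for dy in (0, 1):
        for dx in (0, 1):
            xi, yi = x0 + dx, y0 + dy
            valid = (xi >= 0) & (xi < sw) & (yi >= 0) & (yi < sh)
            wgt = (wx if dx else 1 - wx) * (wy if dy else 1 - wy)
            out[valid] += wgt[valid] * src[yi[valid], xi[valid]]
    return out.reshape(h, w)
```

With the identity homography, the warp is a no-op, which checks the indexing conventions.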
Selection Network.
As discussed below, we generate the novel view by assembling the candidate target images obtained as described above. To this end, we develop a selection network to predict planar region masks from the input image and seed region masks (Section 3.1.2). More precisely, for each seed region, we aim to predict a soft selection map indicating the likelihood of every input pixel being associated with the corresponding homography.
Specifically, the structure of our selection network follows that of the first 4 convolutional blocks of VGG16 [23]. As shown in Fig. 3, each seed region mask is used to max-pool the corresponding 4 feature maps. We then concatenate the resulting 4 pooled features to form a hypercolumn [11] feature, which we convolve with the concatenated complete feature maps at the lower resolution. This yields a low-resolution heat map, which we upsample to the original image size. The resulting heat map indicates a notion of similarity between the features at every pixel and the one pooled over the seed region. This procedure is performed individually for each seed region, but using shared network parameters. Note that the resulting selection maps are defined in the input view, and we thus apply our inverse warping procedure to compute them in the novel view.
Novel View Prediction.
Given the selection maps $S_i$, we first compute a normalized transformed mask for the novel view as

$$\tilde{S}_i(q) = \frac{S_i^w(q) + \epsilon}{\sum_{j} \left( S_j^w(q) + \epsilon \right)}, \qquad (6)$$

where $S_i^w$ denotes the selection map warped with our inverse warping procedure. Note that the resulting transformed masks are not binary, but rather provide weights to combine the estimated target images. To account for the fact that some pixels will be warped outside the input image with all homographies, we make use of a small constant $\epsilon$, which prevents division by 0 in the normalization process and yields uniform weights for such pixels. We compute the novel view as

$$\hat{I}(q) = \sum_{i} \tilde{S}_i(q)\, \hat{I}_i(q). \qquad (7)$$

Note that some of the pixels in the output view will be mapped outside the input image by all homographies. In the simplest version of our approach, we fill in the intensity of each such pixel by using the value at the nearest pixel in the input image. In Section 3.2, we introduce a refinement network that produces more realistic predictions.
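The fusion step can be sketched as follows; the exact placement of the small constant in the normalization is an assumption chosen so that pixels where all warped maps vanish receive uniform weights:

```python
import numpy as np

def combine_candidates(warped_imgs, warped_maps, eps=1e-8):
    """Fuse candidate views: normalize the warped selection maps into
    per-pixel weights and take the weighted sum of the warped images.
    `warped_imgs` and `warped_maps` are lists of H x W arrays; `eps`
    stands in for the paper's small constant."""
    maps = np.stack(warped_maps)                          # n x H x W
    weights = (maps + eps) / (maps + eps).sum(axis=0)     # normalized masks
    return (weights * np.stack(warped_imgs)).sum(axis=0)  # weighted sum
```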
3.1.1 Learning
The novel view predicted using Eq. 7 is a function of the homographies, which themselves are functions of the normal and depth estimates, and of the selection masks. These in turn depend on the depth and normal branch parameters $\theta_d$ and $\theta_n$, and on the selection network parameters $\theta_s$, respectively. Altogether, the prediction can then be thought of as a function $f(I_s, P, M; \theta)$ of the parameters $\theta = (\theta_d, \theta_n, \theta_s)$, given an input image $I_s$, a relative pose $P$, encompassing the 3D rotation, translation and camera intrinsics, and the segmentation seed region masks $M$.
All the operations described above are differentiable. The least obvious cases are the bilinear interpolations of Eqs. 5 and 6, and the use of the inverse homography. For the former, we refer the reader to [14], who showed that the (sub)gradient of bilinear interpolation with respect to the sampling locations can be efficiently computed. For the latter, we propose to exploit the Sherman-Morrison formula [21], provided in the supplementary material, to avoid having to explicitly compute the inverse of the homography.
In our context, this formula lets us express the inverse of the homography analytically as follows. Let

$$g = 1 - \tilde{\mathbf{n}}_i^\top R^{-1} \mathbf{t}. \qquad (8)$$

Then, we have

$$H_i^{-1} = K \left( R^{-1} + \frac{R^{-1} \mathbf{t}\, \tilde{\mathbf{n}}_i^\top R^{-1}}{g} \right) K^{-1}.$$

This formulation makes it easy to compute the gradient of the inverse homography w.r.t. the estimated depth and normals, and thus to train our model using backpropagation.
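The Sherman-Morrison form of the inverse can be checked numerically against a brute-force matrix inverse; the algebraic form below is our reconstruction under the standard rank-one-update identity, not necessarily the exact expression in the paper's supplementary material:

```python
import numpy as np

def homography_inverse(K, R, t, n_tilde):
    """Analytic inverse of H = K (R - t n~^T) K^{-1} via the
    Sherman-Morrison formula, avoiding an explicit inverse of H.
    For a rotation, R^{-1} = R^T."""
    Rt = R.T
    g = 1.0 - n_tilde @ Rt @ t                      # scalar denominator
    M = Rt + np.outer(Rt @ t, n_tilde) @ Rt / g     # (R - t n~^T)^{-1}
    return K @ M @ np.linalg.inv(K)
```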
To this end, we make use of an $\ell_1$ loss between the true target image and the estimated one. Given $N$ training samples, learning can then be expressed as

$$\min_{\theta} \sum_{j=1}^{N} \left\| I_t^{(j)} - f\!\left(I_s^{(j)}, P^{(j)}, M^{(j)}; \theta\right) \right\|_1, \qquad (9)$$

where $I_t^{(j)}$ is the ground-truth novel view, and where, with a slight abuse of notation, we denote the segmentation mask for sample $j$ as $M^{(j)}$. More details about optimization are provided in Section 4.
3.1.2 Obtaining Seed Regions
Throughout our framework, we assume to be given segmentation masks as input, corresponding to the planes we use to represent the scene. To extract these masks, we make use of the following simple, yet effective strategy. We first over-segment the image into superpixels using SLIC [1]. For each superpixel, we then extract its RGB value and center location as features and use k-means to cluster the superpixels into regions. This strategy has the advantage over learning-based segmentation masks of generating compact regions, which are better suited to estimating the corresponding plane parameters. Furthermore, as evidenced by our experiments, it allows us to obtain accurate synthesized views that respect the scene's 3D structure.
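A sketch of this clustering step, assuming the SLIC superpixel label map has already been computed (e.g., with scikit-image); the minimal k-means below stands in for an off-the-shelf implementation:

```python
import numpy as np

def cluster_superpixels(image, sp_labels, n_regions, iters=10, seed=0):
    """Group superpixels into seed regions: each superpixel is described
    by its mean RGB and centre location, and a small k-means clusters
    these features. `sp_labels` is an H x W map of superpixel ids."""
    ids = np.unique(sp_labels)
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    feats = np.array([
        np.r_[image[sp_labels == i].mean(axis=0),      # mean RGB
              ys[sp_labels == i].mean(),               # centre row
              xs[sp_labels == i].mean()]               # centre column
        for i in ids])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_regions, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_regions):
            if (assign == k).any():
                centers[k] = feats[assign == k].mean(axis=0)
    # Map every pixel's superpixel id to its cluster id.
    return assign[np.searchsorted(ids, sp_labels)]
```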
3.2 Refinement Network
Our region-aware geometric-transform network produces a novel view image that preserves the local geometric structures of the scene. While geometric transformations can synthesize the regions that appear in both the input and novel views, they cannot handle the regions that are only present in the novel view, i.e., that were hidden in the input view. To address this, inspired by [20], we make use of the encoder-decoder refinement network depicted by Fig. 4. While the structure of this network is the same as in [20], we make use of a different, simpler loss function to train it.
Specifically, let $\mathcal{L}_p$ denote the mean pixel error. We then define the loss of our refinement network as

$$\mathcal{L}_r = \mathcal{L}_p + \lambda\, \mathcal{L}_f, \qquad (10)$$

where $\mathcal{L}_f$ is a feature loss. That is, it corresponds to an $\ell_1$ loss between features extracted from a fixed VGG19 network, pre-trained for classification on ImageNet. In particular, we concatenate features from the 'conv1_2', 'conv2_2', 'conv3_2', 'conv4_2' and 'conv5_2' layers of VGG19. This strategy has proven effective in [3] in the context of image-to-image translation. In particular, it has the advantage over [20] of not relying on a generative adversarial network, which is known to be hard to train. As shown in our results, this refinement network not only hallucinates the missing parts of the synthesized images, but it also removes the blur arising from combining multiple warped images.

4 Experiments
We evaluate our approach both quantitatively and qualitatively on the challenging urban KITTI odometry dataset [9], which depicts complex scenes with rich structure and dynamic objects, and on the large indoor ScanNet dataset [4], which covers diverse scene types. We compare our approach with the state-of-the-art single-image view synthesis algorithm of [29] for real-world scenes. (Note that, as discussed in Section 2, the transformation-grounded network of [20] focuses on single-object novel view synthesis.) Furthermore, we also report the results of a depth-based baseline consisting of warping the predictions of our depth stream to the new pose, followed by bicubic interpolation to obtain a complete image.
4.1 Experimental Setup
KITTI Dataset. For the comparison with [29] to be fair, we adopt the same data splits. Namely, we use the video sequences with indices 0 to 8 as training set, and 9 to 10 as test set. We then generate our training and test pairs in a manner similar to that of [29]: for each image in a sequence, we randomly sample one frame number for the input image and one for the target image, such that the two frames are separated by at most a fixed number of frames.
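This pairing protocol can be sketched as follows; the uniform offset distribution and the `max_gap` parameter name are our assumptions, since the maximum frame separation is not stated here:

```python
import random

def sample_pair(seq_len, max_gap):
    """Sample an (input, target) frame pair from a sequence of length
    `seq_len`, separated by at most `max_gap` frames, mimicking the
    pairing protocol described above (a sketch)."""
    i = random.randrange(seq_len)
    lo, hi = max(0, i - max_gap), min(seq_len - 1, i + max_gap)
    j = random.randrange(lo, hi + 1)
    return i, j
```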
ScanNet Dataset. We make use of the training, validation and test splits provided with ScanNet. In particular, we use 405 training sequences to learn our model and 312 sequences from the test set for testing. We form the input-target pairs in the same manner as for KITTI. In total, we use 30000 training pairs and 5000 test pairs.
We resize the images from both datasets to match the resolution used in [29]. To obtain the segmentation masks, we first over-segment each image into SLIC [1] superpixels and cluster them into regions, as described in Section 3.1.2. The chosen number of regions represents a good trade-off between the accuracy of our piecewise-planar representation on the training data and the memory consumption of our method. In practice, this proved sufficient to yield realistic novel views.
[Figure: Input view | App. Flow [29] | Ours-Geo | Ours-Full | Ground-truth]
4.2 Training Procedure
We train our model in a stage-wise manner: first the depth and normal branches, then the selection network given fixed depth and normal branches, and finally the refinement network while the rest of the framework is fixed. We then tried to fine-tune the entire network end-to-end, but did not observe any significant improvement.
Training the depth and normal networks.
For the indoor ScanNet dataset, we were able to directly use the network of [5], which predicts both depth and normals. This network was pre-trained on NYUv2 [19], and we simply fine-tuned it on our data. In particular, since ScanNet does not provide ground-truth normals, we fit a plane to each SLIC superpixel and assigned the corresponding normal to all its pixels. The fine-tuned network yields a relative depth error of 0.236. We do not report the normal error, since the ground-truth normals were obtained from the depth maps.
For KITTI, we were unfortunately unable to train an equivalent model from scratch. Therefore, we relied on the simpler encoder-decoder network of Fig. 5, which is more compact and easier to train. To this end, we made use of a regression loss on the inverse depth and of the negative inner product as a normal loss. Note that KITTI only provides sparse ground-truth depth maps. While this is sufficient to train the depth branch, it does not allow us to generate ground-truth normals as in ScanNet. We therefore used the stereo framework of [27] to generate dense depth maps, which we used, in turn, to obtain normal maps using superpixels. The final depth network yields a relative error of 0.274.
Note that we analyze the influence of the depth and normal prediction accuracy on our final novel view synthesis results in our results section.
Table 1: Mean pixel error on KITTI and ScanNet.

Method | KITTI | ScanNet
App. flow [29] | 0.471 | -
Depth-branch | 0.668 | 0.217
Ours-Geo | 0.340 | 0.167
Ours-Full | 0.345 | 0.176
Training the selection network.
The selection network takes the predicted depth and normals, together with the image, relative pose and seed regions, as input to synthesize the novel view. Since we do not have ground-truth labels for the selection maps, we directly trained the selection network using the mean pixel error as a loss.
Training the refinement network.
The refinement network aims to improve an initial synthesized view. We train it using the loss of Eq. 10.
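As a sketch, the refinement loss of Eq. 10 combines the mean pixel error with a feature-space error; `feat_fn` stands in for the fixed VGG-19 feature extractor, and the weight `lam` is an assumed hyper-parameter:

```python
import numpy as np

def refinement_loss(pred, target, feat_fn, lam=1.0):
    """Refinement loss of Eq. 10 (a sketch): mean pixel L1 error plus an
    L1 feature loss between activations of a fixed network. `feat_fn`
    stands in for the concatenated VGG-19 features."""
    l_pix = np.abs(pred - target).mean()
    l_feat = np.abs(feat_fn(pred) - feat_fn(target)).mean()
    return l_pix + lam * l_feat
```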
We implemented our model in TensorFlow and trained it on two NVIDIA Tesla P100 GPUs, each with 16GB of memory. We used mini-batches of size 10, and employed the ADAM solver with a fixed learning rate and the default values $\beta_1 = 0.9$ and $\beta_2 = 0.999$. We will make our code publicly available upon acceptance of the paper.

4.3 Results
In Table 1, we compare our approach, both without (Ours-Geo) and with (Ours-Full) the refinement network, with the state-of-the-art appearance flow technique of [29] on KITTI and ScanNet, based on the mean pixel error metric. Note that our approach outperforms, by a large margin, the baseline that uses our depth estimates without explicitly modeling the scene structure. This evidences the importance of accounting for 3D scene structure. Our approach also significantly outperforms the state-of-the-art appearance flow method on KITTI. (Note that, because the training code for appearance flow is not available, we had to re-implement it; despite confirming that our implementation was correct on the KITTI dataset, we were unable to make training converge on ScanNet.) This again shows the benefits of modeling geometry, as done by our region-aware geometric-transform network. Interestingly, the refinement network tends to slightly degrade the novel view accuracy.
[Figure: Input view | Ours-Geo | Ours-Full | Ground-truth]
Table 2: Ablation study on KITTI (mean pixel error).

gt-Dep | gt-Nor | est-Dep | est-Nor | Seed | Sel-Map | Error
✓ | ✓ | ✗ | ✗ | ✓ | ✗ | 0.357
✓ | ✓ | ✗ | ✗ | ✗ | ✓ | 0.329
✗ | ✗ | ✓ | ✓ | ✓ | ✗ | 0.373
✗ | ✗ | ✓ | ✓ | ✗ | ✓ | 0.340
However, when looking at the qualitative comparisons in Figs. 1, 6 and 7, we can see that our complete model (Ours-Full) yields more realistic novel views than both Ours-Geo and appearance flow [29]. Note that, by not leveraging structure, appearance flow yields unrealistic artefacts. By contrast, the results of our approach, which exploits 3D geometry, look more natural. This, for instance, can be observed by looking at the bottom-right corner of the first image in Fig. 6, where we better model the shape of the object, and at the buildings in the other images.
In Table 2, we analyze the influence of the quality of the depth and normal estimates, and the effect of learning the selection maps. In particular, we compare the error obtained when using the ground-truth depth and normals instead of the predicted ones, and when using the seed regions as 'hard' segmentation masks instead of the learnt selection maps. In both cases, the best results are obtained with our selection maps: ground-truth depth and normals in conjunction with our selection maps perform best, followed by the estimated depth and normals with our selection maps. This shows (i) the importance of learning the combination of the multiple synthesized candidates; and (ii) that the results of our approach will further improve as progress in single-image depth and normal prediction is made. A similar table for ScanNet is provided in the supplementary material.
In Fig. 13, we illustrate what the selection network learns. To this end, we show the initial seed region overlaid on the input image, and the likelihood of the pixels being associated with this plane, as predicted by the selection network. From the examples, we can see that the selection network extends the initial seed regions to larger planes of semantically and visually coherent pixels, such as larger tree regions.
[Figure: Input image | Seed Region | Selection Map | Overlay Image]
5 Conclusion
We have introduced a geometry-aware deep learning framework for novel view synthesis from a single image. Our approach models the scene with a fixed number of planes, and learns to predict homographies, which, in conjunction with a predicted selection map and a desired relative pose, let us generate the novel view. Our experiments on the challenging KITTI and ScanNet datasets have demonstrated the benefits of our approach; by leveraging 3D geometry, our method yields predictions that better match the scene structure, and thus outperforms state-of-the-art single-image novel view synthesis techniques. Training the depth branch of our framework currently relies on ground-truth depth maps. In the future, we will investigate the use of weakly-supervised depth prediction methods [8, 10, 28] that only exploit two views to perform this task.
Acknowledgments. This work was done while the first author was working at Data61, CSIRO, Australia. The Titan X used for this research was donated by the NVIDIA Corporation.
References
 [1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274–2282, 2012.
 [2] G. Chaurasia, S. Duchene, O. Sorkine-Hornung, and G. Drettakis. Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics (TOG), 32(3):30, 2013.
 [3] Q. Chen and V. Koltun. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017.
 [4] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017.
 [5] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, pages 2650–2658, 2015.
 [6] J. Flynn, I. Neulander, J. Philbin, and N. Snavely. DeepStereo: Learning to predict new views from the world's imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5515–5524, 2016.
 [7] Y. Furukawa and J. Ponce. Accurate, dense, and robust multi-view stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1362–1376, 2010.
 [8] R. Garg, G. Carneiro, and I. Reid. Unsupervised CNN for single view depth estimation: Geometry to the rescue. In European Conference on Computer Vision, pages 740–756. Springer, 2016.
 [9] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
 [10] C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised monocular depth estimation with left-right consistency. arXiv preprint arXiv:1609.03677, 2016.
 [11] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 447–456, 2015.
 [12] D. Hoiem, A. A. Efros, and M. Hebert. Automatic photo pop-up. ACM Transactions on Graphics (TOG), 24(3):577–584, 2005.
 [13] Y. Horry, K.-I. Anjyo, and K. Arai. Tour into the picture: Using a spidery mesh interface to make animation from a single image. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pages 225–232. ACM Press/Addison-Wesley Publishing Co., 1997.
 [14] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
 [15] D. Ji, J. Kwon, M. McFarland, and S. Savarese. Deep view morphing. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2017.
 [16] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2539–2547, 2015.
 [17] F. Liu, M. Gleicher, H. Jin, and A. Agarwala. Content-preserving warps for 3d video stabilization. ACM Transactions on Graphics (TOG), 28(3):44, 2009.
 [18] L. McMillan and G. Bishop. Plenoptic modeling: An image-based rendering system. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 39–46. ACM, 1995.
 [19] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012.
 [20] E. Park, J. Yang, E. Yumer, D. Ceylan, and A. C. Berg. Transformation-grounded image generation network for novel 3d view synthesis. In CVPR, 2017.
 [21] W. H. Press. Numerical recipes 3rd edition: The art of scientific computing. Cambridge university press, 2007.
 [22] K. Rematas, C. Nguyen, T. Ritschel, M. Fritz, and T. Tuytelaars. Novel views of objects from a single image. arXiv preprint arXiv:1602.00328, 2016.
 [23] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
 [24] M. Tatarchenko, A. Dosovitskiy, and T. Brox. Multi-view 3d models from single images with a convolutional network. In European Conference on Computer Vision, pages 322–337. Springer, 2016.
 [25] O. J. Woodford, I. D. Reid, P. H. Torr, and A. W. Fitzgibbon. On new view synthesis using multiview stereo. In BMVC, pages 1–10, 2007.
 [26] J. Xie, R. Girshick, and A. Farhadi. Deep3d: Fully automatic 2d-to-3d video conversion with deep convolutional neural networks. In European Conference on Computer Vision, pages 842–857. Springer, 2016.
 [27] J. Zbontar and Y. LeCun. Computing the stereo matching cost with a convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1592–1599, 2015.
 [28] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and egomotion from video. arXiv preprint arXiv:1704.07813, 2017.
 [29] T. Zhou, S. Tulsiani, W. Sun, J. Malik, and A. A. Efros. View synthesis by appearance flow. In European Conference on Computer Vision, pages 286–301. Springer, 2016.
 [30] Z. Zhou, H. Jin, and Y. Ma. Plane-based content preserving warps for video stabilization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2299–2306, 2013.
6 Sherman-Morrison formula
We first provide more detail on the Sherman-Morrison formula, which allows us to explicitly compute the inverse of our homographies. The Sherman-Morrison formula can be stated as follows:

Theorem 1. Assume $A$ is invertible, and $u$ and $v$ are column vectors. Furthermore, assume $1 + v^\top A^{-1} u \neq 0$. Given

$$A + u v^\top, \qquad (11)$$

the inverse can be obtained as

$$\left( A + u v^\top \right)^{-1} = A^{-1} - \frac{A^{-1} u v^\top A^{-1}}{1 + v^\top A^{-1} u}. \qquad (12)$$

In our context, the homography is defined as $H = K \left( R - \frac{t n^\top}{d} \right) K^{-1}$, where $K$ denotes the camera intrinsics, $R$ and $t$ the relative rotation and translation between the two views, and $n$ and $d$ the normal and depth of the plane. Let us first ignore $K$ and concentrate on the central part

$$R - \frac{t n^\top}{d}, \qquad (13)$$

where $R$ is a rotation matrix and is thus invertible, i.e., $R^{-1} = R^\top$. Therefore, Eq. 13 satisfies the conditions of Eq. 11 with $A = R$, $u = -t/d$ and $v = n$, and the inverse can be written as

$$\left( R - \frac{t n^\top}{d} \right)^{-1} = R^\top + \frac{R^\top t n^\top R^\top}{d - n^\top R^\top t}.$$

Reintroducing $K$, and following the standard rule for matrix product inversion, lets us write the inverse homography as

$$H^{-1} = K \left( R - \frac{t n^\top}{d} \right)^{-1} K^{-1}.$$
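As a sanity check, this closed-form inverse can be verified numerically. The sketch below builds a plane-induced homography of the form $K(R - tn^\top/d)K^{-1}$ and inverts its central part with the Sherman-Morrison formula; all numeric values (rotation, translation, plane parameters, intrinsics) are made-up examples, not parameters from the paper.

```python
import numpy as np

# Made-up example parameters for a plane-induced homography
# H = K (R - t n^T / d) K^{-1}.
theta = 0.1  # rotation angle about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([[0.2], [0.1], [0.05]])   # translation (column vector)
n = np.array([[0.0], [0.0], [1.0]])    # plane normal (column vector)
d = 2.0                                # plane depth
K = np.array([[500.0,   0.0, 320.0],   # pinhole intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Central part M = R - t n^T / d and the full homography.
M = R - t @ n.T / d
H = K @ M @ np.linalg.inv(K)

# Sherman-Morrison with A = R, u = -t/d, v = n; for a rotation, R^{-1} = R^T.
R_inv = R.T
denom = d - (n.T @ R_inv @ t).item()   # must be nonzero for M to be invertible
M_inv = R_inv + (R_inv @ t @ n.T @ R_inv) / denom

# The closed-form inverse matches the numerically computed one.
H_inv = K @ M_inv @ np.linalg.inv(K)
assert np.allclose(M @ M_inv, np.eye(3))
assert np.allclose(H @ H_inv, np.eye(3))
```

This avoids calling a generic matrix-inversion routine for each predicted homography: only the (trivial) transpose of $R$ and a scalar division are needed.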
7 Experiments
In this section, we provide additional results on the two datasets. We further illustrate the synthesized images obtained from the homographies generated by our method, and show additional examples of the selection maps our network predicts. We then provide visualisations of our estimated depth and normal maps for the KITTI dataset and discuss the failure cases of our approach.
7.1 Additional Results
We provide additional qualitative results on the KITTI dataset in Figs. 9 and 10, and on the ScanNet dataset in Fig. 11. Like those in the main paper, they clearly illustrate the benefits of our approach over the state-of-the-art appearance flow baseline [29]; specifically, accounting for geometry lets us produce much more realistic novel views. Note also that our complete approach (Ours-Full), with the refinement network, typically yields sharper results than our basic framework without refinement (Ours-Geo). This can be seen, e.g., in the third to seventh rows of Fig. 9.
In Table 3, we analyze the influence of the quality of the depth and normal estimates, and of learning the selection maps, on ScanNet. Note that, compared to Table 2 in the main paper, which shows a similar analysis for KITTI, we eliminated the factor ‘gtNor’ because it is computed from ‘gtDep’. In essence, the behavior is the same as for KITTI. The best results are obtained with the ground-truth depth maps, which leaves room for our method to improve as progress in depth estimation is made. More importantly, our learnt selection maps give a significant boost to our results, whether we use the ground-truth depth or the estimated one.
[Figures 9 and 10 (KITTI): Input view | App. Flow [29] | Ours-Geo | Ours-Full | Ground-truth]
[Figure 11 (ScanNet): Input view | Ours-Geo | Ours-Full | Ground-truth]
7.2 Synthesized Candidate Images
In Fig. 12, we show the synthesized images obtained from our predicted homographies for one input image. When compared with the ground-truth novel view, we can see that different homographies account for the motion of different regions in the image. For instance, the homography corresponding to the top-left image accounts for the motion of the road. By contrast, the homography corresponding to the bottom-right image models the motion of the buildings. Correctly combining these images then allows us to obtain a realistic novel view, as shown in the top row of Fig. 12.
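The combination step can be sketched in code. This is a minimal illustration under assumptions of ours, not the paper's actual network: `combine_candidates` is a hypothetical helper that blends the warped candidate images with a per-pixel softmax over the predicted selection maps.

```python
import numpy as np

# Minimal sketch (an illustration, not the paper's implementation): each of
# the N candidate images comes from warping the input view with one predicted
# homography, and the predicted selection maps decide, per pixel, how much
# each candidate contributes to the novel view.
def combine_candidates(candidates, selection_maps):
    """candidates: (N, H, W, 3) images; selection_maps: (N, H, W) scores."""
    # Turn the per-candidate scores into per-pixel weights with a softmax,
    # so the weights over the N candidates sum to one at every pixel.
    logits = selection_maps - selection_maps.max(axis=0, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)
    # Weighted sum of the candidates gives the combined novel view.
    return (weights[..., None] * candidates).sum(axis=0)

# Toy usage with two constant candidates; the scores strongly select the
# second one, so the output is (numerically) the second candidate.
cands = np.stack([np.zeros((4, 4, 3)), np.ones((4, 4, 3))])
scores = np.stack([np.full((4, 4), -10.0), np.full((4, 4), 10.0)])
novel_view = combine_candidates(cands, scores)
```

A soft, normalized combination keeps the operation differentiable, so the selection network can be trained end-to-end with the rest of the pipeline.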
[Figure 12: Input image | Ground-truth novel view | Ours-Full | Ours-Geo; below: synthesized candidate images]
gtDep | estDep | estNor | Seed | SelMap | Error
  ✓   |   ✗    |   ✓    |  ✓   |   ✗    | 0.174
  ✓   |   ✗    |   ✓    |  ✗   |   ✓    | 0.159
  ✗   |   ✓    |   ✓    |  ✓   |   ✗    | 0.184
  ✗   |   ✓    |   ✓    |  ✗   |   ✓    | 0.167
7.3 Selection Maps
In Fig. 13, we provide additional results from our selection network. While our seed regions typically cover only parts of the road, trees, sky, and buildings, our predicted selection maps extend them to larger planar and semantically coherent regions.
[Figure 13, two examples per row: Input image | Seed Region | Selection Map | Overlay Image]
7.4 Depth and Normal Prediction
In Fig. 14, we provide a visualisation of the depth and normal maps estimated by our network for sample images from the KITTI test set. It shows that our estimates capture the scene structure well when compared with the ground truth.
7.5 Failure Cases
In Fig. 15, we show typical failure cases of our approach. The failures are mainly due to i) moving objects, whose locations cannot be explained by camera motion (see the first row); and ii) the need to hallucinate large portions of the image (e.g., because of backward motion), in which case our method tends to generate background and miss foreground objects (see the last two examples).
[Figure 14: Image | GT-depth | Est-depth | Est-normal]
[Figure 15: Input view | Ours-Geo | Ours-Full | Ground-truth]