Learning Neural Representation of Camera Pose with Matrix Representation of Pose Shift via View Synthesis

by Yaxuan Zhu, et al.

How to effectively represent camera pose is an essential problem in 3D computer vision, especially in tasks such as camera pose regression and novel view synthesis. Traditionally, the 3D position of the camera is represented by Cartesian coordinates and the orientation by Euler angles or quaternions. These representations are manually designed and may not be the most effective for downstream tasks. In this work, we propose an approach to learn neural representations of camera poses and 3D scenes, coupled with neural representations of local camera movements. Specifically, the camera pose and the 3D scene are represented as vectors, and a local camera movement is represented as a matrix operating on the pose vector. We demonstrate that the camera movement can further be parametrized by a matrix Lie algebra that underlies a rotation system in the neural space. The vector representations are then concatenated and passed through a decoder network to generate the posed 2D image. The model is learned from posed 2D images and the corresponding camera poses alone, without access to depths or shapes. We conduct extensive experiments on synthetic and real datasets. The results show that, compared with other camera pose representations, our learned representation is more robust to noise in novel view synthesis and more effective in camera pose regression.
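The core idea above can be sketched in a few lines: a pose is a vector in a learned space, and a local camera movement is the matrix exponential of an element of a matrix Lie algebra spanned by skew-symmetric generators, which yields a rotation acting on the pose vector. The sketch below is illustrative only, not the authors' implementation: the dimensionality `d`, the random generator basis, and the movement coordinates are all hypothetical stand-ins for quantities that would be learned.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical dimensionality of the neural pose vector


def random_skew(rng, d):
    """A random skew-symmetric matrix: A - A^T spans a matrix Lie algebra so(d)."""
    A = rng.standard_normal((d, d))
    return A - A.T


# Hypothetical basis of generators; in the paper these would be learned.
basis = [random_skew(rng, d) for _ in range(3)]


def mat_exp(X, terms=30):
    """Matrix exponential via a truncated Taylor series (adequate for small X)."""
    out = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out


def pose_shift(thetas):
    """Matrix representing a local camera movement with coordinates `thetas`."""
    X = sum(t * B for t, B in zip(thetas, basis))
    return mat_exp(X)


v = rng.standard_normal(d)           # neural representation of the current pose
M = pose_shift([0.1, -0.2, 0.05])    # a small movement in the learned rotation system
v_new = M @ v                        # pose representation after the movement

# Because the generators are skew-symmetric, M is (numerically) orthogonal,
# so the movement rotates the pose vector without changing its norm.
print(np.allclose(np.linalg.norm(v_new), np.linalg.norm(v)))
```

Parametrizing movements through the exponential of a skew-symmetric span is what makes the neural space behave as a rotation system: composed movements stay on the rotation group, and the identity movement (`thetas = 0`) leaves the pose vector fixed.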




