BPnP: Further Empowering End-to-End Learning with Back-Propagatable Geometric Optimization

09/13/2019
by Bo Chen, et al.

In this paper we present BPnP, a novel method for back-propagating through a Perspective-n-Point (PnP) solver. We show that the gradients of such a geometric optimization process can be computed using the Implicit Function Theorem, as if the solver were differentiable. Furthermore, we develop a residual-conformity trick that makes end-to-end pose regression with BPnP smooth and stable. We also propose a "march in formation" algorithm which successfully uses BPnP for keypoint regression. Our approach opens the door to vast possibilities: the ability to incorporate geometric optimization in end-to-end learning will greatly boost its power and promote innovation across various computer vision tasks.
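The core idea, differentiating through the output of a non-differentiable optimizer via the Implicit Function Theorem, can be illustrated on a toy problem. The sketch below is not the authors' implementation: it replaces the PnP solver with a generic scipy optimizer and the reprojection error with a hypothetical objective f, but the backward pass uses the same IFT identity dy*/dx = -(dg/dy)^{-1} (dg/dx), where g = df/dy is the stationarity condition satisfied at the solver's output.

```python
import torch
from scipy.optimize import minimize  # stands in for the non-differentiable solver


def f(x, y):
    # Hypothetical smooth objective; BPnP's actual inner objective is the
    # reprojection error minimized by a PnP solver.
    return ((y - x) ** 2).sum() + 0.1 * (y ** 4).sum()


class ImplicitArgmin(torch.autograd.Function):
    """y*(x) = argmin_y f(x, y), with IFT-based gradients in backward."""

    @staticmethod
    def forward(ctx, x):
        x_d = x.detach()
        # Black-box optimizer: no autograd graph is recorded here.
        res = minimize(lambda y: f(x_d, torch.from_numpy(y)).item(),
                       x_d.numpy())
        y_star = torch.from_numpy(res.x).to(x.dtype)
        ctx.save_for_backward(x_d, y_star)
        return y_star

    @staticmethod
    def backward(ctx, grad_out):
        x, y = (t.clone().requires_grad_(True) for t in ctx.saved_tensors)
        # Stationarity residual g(x, y) = df/dy vanishes at the optimum.
        g = torch.autograd.grad(f(x, y), y, create_graph=True)[0]
        n = y.numel()
        # Jacobians of g via autograd, one row per component of g.
        dg_dy = torch.stack([torch.autograd.grad(g[i], y, retain_graph=True)[0]
                             for i in range(n)])
        dg_dx = torch.stack([torch.autograd.grad(g[i], x, retain_graph=True)[0]
                             for i in range(n)])
        # IFT: dy*/dx = -(dg/dy)^{-1} (dg/dx); chain with upstream gradient.
        dy_dx = -torch.linalg.solve(dg_dy, dg_dx)
        return dy_dx.t() @ grad_out


x = torch.tensor([0.5, -1.0], dtype=torch.float64, requires_grad=True)
y = ImplicitArgmin.apply(x)
y.sum().backward()
print(x.grad)  # gradients flow through the black-box optimizer
```

In BPnP the inner problem is PnP itself, so y would be the camera pose and x the 2D keypoints (together with the 3D points and camera intrinsics); the same stationarity-based backward pass applies.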
