Shape correspondences from learnt template-based parametrization
We present a new deep learning approach for matching deformable shapes, using a model which jointly encodes 3D shapes and correspondences. This is achieved by factoring the surface representation into (i) a template, which parameterizes the surface, and (ii) a learnt feature vector, which parameterizes the function that transforms the template into the input surface. We show that our network can directly predict the feature vector, and thus correspondences, for a new input shape, but also that correspondence quality can be significantly improved by an additional regression step. This step refines the shape feature vector by minimizing the Chamfer distance between the input shape and the parameterized shape, producing both a better shape representation and better correspondences. We demonstrate that our simple approach improves on state-of-the-art results on the difficult FAUST inter challenge, with an average correspondence error of 2.88 cm. We also show results on real scans from the SCAPE dataset and on synthetically perturbed shapes from the TOSCA dataset, including non-human shapes.
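The refinement step described above minimizes the Chamfer distance between the input point set and the deformed template. As a minimal sketch of that objective (not the authors' implementation), the symmetric Chamfer distance between two point clouds stored as `(N, 3)` NumPy arrays can be computed as follows; the function name `chamfer_distance` is ours:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    # Pairwise squared distances: entry (i, j) is ||a_i - b_j||^2.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical point sets have zero Chamfer distance.
pts = np.random.rand(100, 3)
print(chamfer_distance(pts, pts))  # -> 0.0
```

In the paper's regression step, a loss of this form would be minimized with respect to the shape feature vector (via gradients through the template-deformation network), so that the parameterized surface moves closer to the input scan.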