Reconstructing Articulated Rigged Models from RGB-D Videos

09/06/2016
by Dimitrios Tzionas, et al.

Although commercial and open-source software exists to reconstruct a static object from a sequence recorded with an RGB-D sensor, there is a lack of tools that build rigged models of articulated objects that deform realistically and can be used for tracking or animation. In this work, we fill this gap and propose a method that creates a fully rigged model of an articulated object from the depth data of a single sensor. To this end, we combine deformable mesh tracking, motion segmentation based on spectral clustering, and skeletonization based on mean curvature flow. The fully rigged model then consists of a watertight mesh, an embedded skeleton, and skinning weights.
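The motion-segmentation step can be illustrated with a minimal sketch: given per-vertex trajectories from deformable mesh tracking, vertices that move rigidly together are grouped by spectral clustering of a trajectory-similarity graph. This is a simplified, self-contained illustration (plain NumPy, with a basic k-means on the spectral embedding), not the authors' implementation; the function name, the Gaussian affinity, and all parameters are assumptions for the sketch.

```python
import numpy as np

def motion_segmentation(trajectories, n_parts, sigma=1.0, seed=0):
    """Group mesh vertices into rigidly moving parts via spectral clustering.

    trajectories: (V, T, 3) array of per-vertex positions over T frames.
    Vertices with similar trajectories end up in the same cluster.
    (Illustrative sketch only; not the paper's exact formulation.)
    """
    V = trajectories.shape[0]
    flat = trajectories.reshape(V, -1)
    # Pairwise squared trajectory distances -> Gaussian affinity matrix.
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    # Symmetrically normalized graph Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(V) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Embed each vertex using eigenvectors of the smallest eigenvalues.
    _, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    emb = vecs[:, :n_parts]
    # Simple k-means on the spectral embedding.
    rng = np.random.default_rng(seed)
    centers = emb[rng.choice(V, n_parts, replace=False)]
    labels = np.zeros(V, dtype=int)
    for _ in range(50):
        labels = np.argmin(((emb[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_parts):
            if (labels == k).any():
                centers[k] = emb[labels == k].mean(axis=0)
    return labels

# Toy usage: 10 vertices over 5 frames; the second half translates
# over time while the first half stays put, so two parts emerge.
traj = np.zeros((10, 5, 3))
traj[5:] += np.arange(5)[None, :, None]
labels = motion_segmentation(traj, n_parts=2)
```

The spectral embedding makes the clustering robust to the absolute scale of the motion: vertices on the same rigid part form a tightly connected component of the affinity graph, which the Laplacian's low eigenvectors separate cleanly.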

