Related papers:
- ARCH: Animatable Reconstruction of Clothed Humans
- Learning Human Pose Models from Synthesized Data for Robust RGB-D Action Recognition
- Dressing 3D Humans using a Conditional Mesh-VAE-GAN
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion
- 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data
- Automatic Generation of Constrained Furniture Layouts
- Learning pose variations within shape population by constrained mixtures of factor analyzers
S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling
Constructing and animating humans is an important component of building virtual worlds for a wide variety of applications, such as virtual reality or robotics testing in simulation. Because there are exponentially many variations of humans in shape, pose, and clothing, it is critical to develop methods that can automatically reconstruct and animate humans at scale from real-world data. Towards this goal, we represent a pedestrian's shape, pose, and skinning weights as neural implicit functions that are learned directly from data. This representation handles a wide variety of pedestrian shapes and poses without explicitly fitting a parametric human body model, and therefore supports a broader range of human geometries and topologies. We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods. Furthermore, our re-animation experiments show that we can generate 3D human animations at scale from a single RGB image (and/or an optional LiDAR sweep) as input.
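
To make the representation concrete, the following is a minimal sketch, not the paper's actual architecture: a single MLP implicit field that maps a 3D query point (conditioned on a hypothetical pixel-aligned feature) to an occupancy value and per-joint skinning weights, plus a linear-blend-skinning step for re-posing. The layer sizes, the feature dimension feat_dim, the assumed SMPL-like 24-joint skeleton, and the lbs helper are all illustrative choices, not details taken from the paper.

# Minimal sketch (assumptions, not the paper's architecture): one MLP maps a
# 3D query point plus a conditioning feature to occupancy and skinning weights.
import torch
import torch.nn as nn

NUM_JOINTS = 24  # assumed SMPL-like skeleton; the abstract does not specify this

class ImplicitHumanField(nn.Module):
    def __init__(self, feat_dim: int = 256, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occ_head = nn.Linear(hidden, 1)            # occupancy logit per point
        self.skin_head = nn.Linear(hidden, NUM_JOINTS)  # skinning-weight logits

    def forward(self, points, feats):
        # points: (B, N, 3) query locations; feats: (B, N, feat_dim)
        # hypothetical image/LiDAR features sampled at each query point.
        h = self.backbone(torch.cat([points, feats], dim=-1))
        occupancy = torch.sigmoid(self.occ_head(h))        # (B, N, 1) inside/outside
        skin_w = torch.softmax(self.skin_head(h), dim=-1)  # (B, N, J), rows sum to 1
        return occupancy, skin_w

def lbs(points, skin_w, joint_transforms):
    # Linear blend skinning: re-pose surface points by blending per-joint
    # rigid transforms with the predicted skinning weights.
    # points: (N, 3); skin_w: (N, J); joint_transforms: (J, 4, 4)
    homo = torch.cat([points, torch.ones_like(points[..., :1])], dim=-1)  # (N, 4)
    blended = torch.einsum("nj,jab->nab", skin_w, joint_transforms)       # (N, 4, 4)
    return torch.einsum("nab,nb->na", blended, homo)[..., :3]

In this sketch, extracting a surface from the occupancy field (e.g., via marching cubes) and then applying lbs with a sequence of target skeleton poses would produce the kind of re-animation the abstract describes.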