GOAL: Generating 4D Whole-Body Motion for Hand-Object Grasping

12/21/2021
by Omid Taheri, et al.

Generating digital humans that move realistically has many applications and is widely studied, but existing methods focus on the major limbs of the body, ignoring the hands and head. Hands have been separately studied, but the focus there has been on generating realistic static grasps of objects. To synthesize virtual characters that interact with the world, we need to generate full-body motions and realistic hand grasps simultaneously. Both sub-problems are challenging on their own and, together, the state-space of poses is significantly larger, the scales of hand and body motions differ, and the whole-body posture and the hand grasp must agree, satisfy physical constraints, and be plausible. Additionally, the head is involved because the avatar must look at the object to interact with it.

For the first time, we address the problem of generating full-body, hand, and head motions of an avatar grasping an unknown object. As input, our method, called GOAL, takes a 3D object, its position, and a starting 3D body pose and shape. GOAL outputs a sequence of whole-body poses using two novel networks. First, GNet generates a goal whole-body grasp with a realistic body, head, arm, and hand pose, as well as hand-object contact. Second, MNet generates the motion between the starting and goal pose. This is challenging, as it requires the avatar to walk towards the object with foot-ground contact, orient the head towards it, reach out, and grasp it with a realistic hand pose and hand-object contact. To achieve this, the networks exploit a representation that combines SMPL-X body parameters and 3D vertex offsets.

We train and evaluate GOAL, both qualitatively and quantitatively, on the GRAB dataset. Results show that GOAL generalizes well to unseen objects, outperforming baselines. GOAL takes a step towards synthesizing realistic full-body object grasping.

