PACE: Data-Driven Virtual Agent Interaction in Dense and Cluttered Environments

03/24/2023
by   James Mullen, et al.

We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes. Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment. We first take the individual frames of the motion sequence most important for modeling interactions with the scene and pair them with the relevant scene geometry, obstacles, and semantics, such that interactions in the agent's motion match the affordances of the scene (e.g., standing on a floor or sitting in a chair). We then optimize the motion of the human by directly altering the high-DOF pose at each frame in the motion to better account for the unique geometric constraints of the scene. Our formulation uses novel loss functions that maintain a realistic flow and natural-looking motion. We compare our method with prior motion-generation techniques and highlight the benefits of our method with a perceptual study and physical plausibility metrics. Human raters preferred our method over the prior approaches: specifically, they preferred our method 57.1% over the state-of-the-art method using existing motions, and 81.0% over a state-of-the-art motion synthesis method. Additionally, our method performs significantly better on established physical plausibility and interaction metrics. Specifically, we outperform competing methods by over 1.2% in terms of the non-collision metric and by over 18% in terms of the contact metric. We have integrated our interactive system with Microsoft HoloLens and demonstrate its benefits in real-world indoor scenes. Our project website is available at https://gamma.umd.edu/pace/.
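To make the pose-optimization idea in the abstract concrete, the sketch below shows a generic scene-aware refinement loop in the spirit described above: the high-DOF pose at each frame is adjusted by gradient descent under a penetration loss, a temporal-smoothness loss, and a term that keeps the result close to the captured motion. This is not the authors' code; the function names (`scene_sdf`, `optimize_motion`), the stand-in skeleton, and all loss weights are illustrative assumptions.

```python
# Minimal sketch of per-frame, scene-aware pose optimization (assumed setup):
# poses are a (T, D) tensor of high-DOF joint parameters, the scene is summarized
# by a signed-distance function, and forward kinematics maps poses to 3D joints.
import torch


def scene_sdf(points):
    # Placeholder SDF: a flat floor at y = 0. A real scene would use a
    # mesh-based or learned signed-distance field of the cluttered environment.
    return points[..., 1]


def optimize_motion(init_poses, joint_positions_fn, steps=200, lr=0.01):
    poses = init_poses.clone().requires_grad_(True)           # (T, D) pose parameters
    optim = torch.optim.Adam([poses], lr=lr)
    for _ in range(steps):
        optim.zero_grad()
        joints = joint_positions_fn(poses)                     # (T, J, 3) joint positions
        sdf = scene_sdf(joints)
        loss_penetration = torch.relu(-sdf).mean()             # penalize joints inside geometry
        loss_smooth = (poses[1:] - poses[:-1]).pow(2).mean()   # keep the motion temporally smooth
        loss_prior = (poses - init_poses).pow(2).mean()        # stay close to the captured motion
        loss = loss_penetration + 0.1 * loss_smooth + 0.01 * loss_prior
        loss.backward()
        optim.step()
    return poses.detach()


if __name__ == "__main__":
    # Toy usage: 30 frames of a 63-DOF pose, with a stand-in "skeleton" that
    # simply reshapes the pose vector into 21 pseudo-joint positions.
    T, D = 30, 63
    init = torch.randn(T, D) * 0.1
    fk = lambda p: p.view(T, 21, 3)
    refined = optimize_motion(init, fk)
    print(refined.shape)
```

In the paper's actual formulation, the loss terms additionally encode scene affordances and contact semantics (e.g., feet on the floor, pelvis on a seat) rather than only the generic penetration and smoothness terms sketched here.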


