Neural Field Movement Primitives for Joint Modelling of Scenes and Motions

08/09/2023 · by Ahmet Tekden, et al.

This paper presents a novel Learning from Demonstration (LfD) method that uses neural fields to learn new skills efficiently and accurately. It achieves this by using a shared embedding to learn both scene and motion representations in a generative way. Our method smoothly maps each expert demonstration to a scene-motion embedding and learns to model both jointly, without requiring hand-crafted task parameters or large datasets. It achieves data efficiency by enforcing scene and motion generation to be smooth with respect to changes in the embedding space. At inference time, our method can retrieve scene-motion embeddings via test-time optimization and generate precise motion trajectories for novel scenes. The proposed method is versatile and can employ images, 3D shapes, and any other scene representation that can be modeled with neural fields. Additionally, it can generate both end-effector position and joint-angle trajectories. Our method is evaluated on tasks that require accurate motion trajectory generation, where the underlying task parametrization is based on object positions and geometric scene changes. Experimental results demonstrate that the proposed method outperforms the baseline approaches and generalizes to novel scenes. Furthermore, in real-world experiments, we show that our method can successfully model multi-valued trajectories, is robust to distractor objects introduced at inference time, and can generate 6D motions.
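The core idea of the abstract, a single shared embedding that decodes to both a scene field and a motion field, with test-time optimization recovering the embedding for a novel scene, can be sketched in a toy NumPy example. This is not the authors' implementation: the linear decoders, the feature map `phi`, and all shapes (`M_scene`, `M_start`, `M_goal`, `D_Z`) are illustrative stand-ins for the paper's neural fields.

```python
import numpy as np

# Toy illustration (not the authors' code): one latent z decodes to BOTH
# a scene field and a motion field; a novel scene's embedding is recovered
# by test-time optimization on the scene loss alone, after which the
# motion for that scene comes "for free" from the shared embedding.
rng = np.random.default_rng(0)
D_Z = 4

# Orthogonal "decoder" weights keep the toy problem well conditioned.
M_scene = np.linalg.qr(rng.normal(size=(4, D_Z)))[0]  # z -> scene coefficients
M_start = rng.normal(size=(2, D_Z))                   # z -> trajectory start
M_goal = rng.normal(size=(2, D_Z))                    # z -> trajectory goal

def phi(x):
    """Fixed nonlinear features of a 2-D query point (in lieu of an MLP)."""
    return np.array([1.0, x[0], x[1], x[0] * x[1]])

def scene_field(z, x):
    """Scalar scene value (e.g. an occupancy logit) at query point x."""
    return phi(x) @ (M_scene @ z)

def motion_field(z, t):
    """2-D end-effector position at phase t in [0, 1]."""
    return (1.0 - t) * (M_start @ z) + t * (M_goal @ z)

# A "novel scene": samples of the scene field generated by an unknown z.
z_true = rng.normal(size=D_Z)
xs = rng.normal(size=(16, 2))
ys = np.array([scene_field(z_true, x) for x in xs])

# Test-time optimization: gradient descent on the scene reconstruction
# loss recovers the shared embedding from scene observations alone.
Phi = np.stack([phi(x) for x in xs])
H = 2.0 * M_scene.T @ Phi.T @ Phi @ M_scene / len(xs)  # loss Hessian
lr = 1.0 / np.linalg.eigvalsh(H).max()                 # safe step size
z = np.zeros(D_Z)
for _ in range(50_000):
    resid = Phi @ (M_scene @ z) - ys                   # scene residuals
    z -= lr * 2.0 * M_scene.T @ Phi.T @ resid / len(xs)

# The recovered embedding now also generates the motion for this scene.
traj = np.stack([motion_field(z, t) for t in np.linspace(0.0, 1.0, 5)])
```

Because the scene samples are noiseless and the toy decoder is full rank, the optimized `z` matches the embedding that generated the scene, so the decoded trajectory adapts to the novel scene without any motion supervision at test time, which mirrors the inference procedure described in the abstract.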


Related research

- HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes (10/18/2022). Learning to generate diverse scene-aware and goal-oriented human motions...
- Scene-aware Generative Network for Human Motion Synthesis (05/31/2021). We revisit human motion synthesis, a task useful in various real world a...
- 3D Motion Magnification: Visualizing Subtle Motions with Time Varying Radiance Fields (08/07/2023). Motion magnification helps us visualize subtle, imperceptible motion. Ho...
- TEAM: a parameter-free algorithm to teach collaborative robots motions from user demonstrations (09/14/2022). Collaborative robots (cobots) built to work alongside humans must be abl...
- PRIMP: PRobabilistically-Informed Motion Primitives for Efficient Affordance Learning from Demonstration (05/25/2023). This paper proposes a learning-from-demonstration method using probabili...
- Neural Multisensory Scene Inference (10/06/2019). For embodied agents to infer representations of the underlying 3D physic...
- Learning to Shift Attention for Motion Generation (02/24/2021). One challenge of motion generation using robot learning from demonstrati...
