ZeroEGGS: Zero-shot Example-based Gesture Generation from Speech

09/15/2022
by Saeed Ghorbani, et al.

We present ZeroEGGS, a neural network framework for speech-driven gesture generation with zero-shot style control by example. Style can be controlled via only a short example motion clip, even for motion styles unseen during training. Our model uses a variational framework to learn a style embedding, making it easy to modify style through latent space manipulation or through blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs given the same input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state-of-the-art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high-quality dataset of full-body gesture motion including fingers, with speech, spanning 19 different styles.
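Because the style embeddings live in a continuous latent space, new styles can be produced by interpolating or scaling existing embeddings. A minimal NumPy sketch of these two operations as the abstract describes them; the function names, the 64-dimensional embedding size, and the random placeholder embeddings are all hypothetical, not taken from the paper's implementation:

```python
import numpy as np

def blend_styles(z_a, z_b, alpha=0.5):
    """Linearly interpolate between two style embeddings.

    alpha = 0.0 returns z_a unchanged; alpha = 1.0 returns z_b.
    """
    return (1.0 - alpha) * z_a + alpha * z_b

def scale_style(z, gamma=1.0):
    """Scale an embedding away from (gamma > 1) or toward (gamma < 1)
    the origin, exaggerating or attenuating the portrayed style."""
    return gamma * z

# Placeholder embeddings standing in for the encoder's output.
rng = np.random.default_rng(0)
z_happy = rng.normal(size=64)
z_calm = rng.normal(size=64)

z_mixed = blend_styles(z_happy, z_calm, alpha=0.3)   # mostly "happy"
z_strong = scale_style(z_happy, gamma=2.0)           # exaggerated "happy"
```

In practice the blended or scaled vector would be fed to the gesture decoder in place of an encoded example clip, which is what makes the style control zero-shot.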

Related research

03/26/2023
GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents
The automatic generation of stylized co-speech gestures has recently rec...

11/17/2022
Listen, denoise, action! Audio-driven motion synthesis with diffusion models
Diffusion models have experienced a surge of interest as highly expressi...

03/05/2020
Learning to mirror speaking styles incrementally
Mirroring is the behavior in which one person subconsciously imitates th...

07/24/2020
Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach
How can we teach robots or virtual assistants to gesture naturally? Can ...

09/01/2018
Cost Functions for Robot Motion Style
We focus on autonomously generating robot motion for day to day physical...

08/03/2022
Zero-Shot Style Transfer for Gesture Animation driven by Text and Speech using Adversarial Disentanglement of Multimodal Style Encoding
Modeling virtual agents with behavior style is one factor for personaliz...

09/18/2021
KNN Learning Techniques for Proportional Myocontrol in Prosthetics
This work has been conducted in the context of pattern-recognition-based...
