Audio2Gestures: Generating Diverse Gestures from Speech Audio with Conditional Variational Autoencoders

08/15/2021
by   Jing Li, et al.
Harbin Institute of Technology

Generating conversational gestures from speech audio is challenging due to the inherent one-to-many mapping between audio and body motions. Conventional CNNs/RNNs assume one-to-one mapping, and thus tend to predict the average of all possible target motions, resulting in plain/boring motions during inference. In order to overcome this problem, we propose a novel conditional variational autoencoder (VAE) that explicitly models one-to-many audio-to-motion mapping by splitting the cross-modal latent code into shared code and motion-specific code. The shared code mainly models the strong correlation between audio and motion (such as the synchronized audio and motion beats), while the motion-specific code captures diverse motion information independent of the audio. However, splitting the latent code into two parts poses training difficulties for the VAE model. A mapping network facilitating random sampling along with other techniques including relaxed motion loss, bicycle constraint, and diversity loss are designed to better train the VAE. Experiments on both 3D and 2D motion datasets verify that our method generates more realistic and diverse motions than state-of-the-art methods, quantitatively and qualitatively. Finally, we demonstrate that our method can be readily used to generate motion sequences with user-specified motion clips on the timeline. Code and more results are at https://jingli513.github.io/audio2gestures.


1 Introduction


Figure 1: Illustration of the existence of one-to-many mapping between audio and motion in the Trinity dataset [9]. Different gestures are performed when the subject says “completely”. Similar phenomena broadly exist in co-speech gestures. The character used for demonstration is from Mixamo [32].

In the real world, co-speech gestures help people express themselves better, and in the virtual world, they make a talking avatar act more vividly. Attracted by these merits, there has recently been growing demand for generating realistic human motions for given audio clips. This problem is very challenging because of the complicated one-to-many relationship between audio and motion: a speaker may perform different gestures when speaking the same words, due to different mental and physical states.

Existing algorithms for audio-to-body dynamics have obvious limitations. For example, [11] adapts a fully convolutional neural network to the co-speech gesture synthesis task. Nevertheless, their model tends to predict averaged motion and thus generates motions lacking diversity. This is due to the underlying one-to-one mapping assumption of their model, which ignores that the relationship between speech and co-speech gesture is one-to-many in nature. Under such an overly simplified assumption, the model has no choice but to learn the average of the several motions that match nearly identical audio clips in order to minimize the error. This observation inspires us to study whether explicitly modeling this multimodality improves the overall motion quality. To enhance the regression capability, we introduce an extra motion-specific latent code. With this varying full latent code, which contains the same shared code and a varying motion-specific code, the decoder can regress different motion targets for the same audio, achieving one-to-many mapping. Under this formulation, the shared code extracted from the audio input serves as part of the control signal, while the motion-specific code further modulates the audio-controlled motion, enabling multimodal motion generation.

Although this formulation is straightforward, it is not trivial to make it work as expected. Firstly, there exists an easy degenerate solution, since the motion decoder could use only the motion-specific code to reconstruct the motion. Secondly, we must generate the motion-specific code at inference time, since we do not have access to the target motion. Our solution to both problems is to feed random noise as the motion-specific code, so that the decoder has to utilize the deterministic information contained in the shared code to reconstruct the target.

Under this circumstance, however, it is no longer suitable to force the motion decoder to reconstruct the exact original target motion. We therefore propose a relaxed motion loss, applied to motions generated with a random motion-specific code. Specifically, it only penalizes joints that deviate from their targets by more than a threshold. This loss encourages the motion-specific code to tune the final motion while respecting the shared code’s control.

Our contributions can be summarized as:

  • We present a co-speech gesture generation model whose latent space is split into shared code and motion-specific code to better regress the training data and generate diverse motions.

  • We utilize random sampling and a relaxed motion loss to avoid degeneration of the proposed network and enable the model to generate multimodal motions.

  • The effectiveness of the proposed method has been verified on 3D and 2D gesture generation tasks by comparing it with several state-of-the-art methods.

  • The proposed method is suitable for motion synthesis from annotations, since it respects pre-defined actions on the timeline by simply using their corresponding motion-specific codes.

2 Related Work

Audio to body dynamics.

Early methods generate human motion for a given audio input by blending motion clips chosen from a motion database according to a hidden Markov model [27] or conditional random fields [26]. Algorithms that select motion candidates from a pre-processed database usually cannot generate motions outside the database and do not scale to large databases.

Recently, deep generative models, such as VAEs [21] and GANs [13], have achieved great success in generating realistic images as well as human motions [39, 16, 28]. For example, [36] utilizes a classic LSTM to predict the body movements of a person playing the piano or violin given the sound of the instruments. However, such body movements show regular cyclic patterns and are usually constrained within a small pose space.

In contrast, generating co-speech gestures is more challenging in two respects: the motion to generate is more complicated, and so is the relationship between speech and motion. Accordingly, Speech2Gesture [11] proposes a more powerful fully convolutional network, consisting of an 8-layer CNN audio encoder and a 16-layer 1D U-Net decoder, to translate log-mel audio features into gestures. This network is trained with 14.4 hours of data per individual on average, compared to 3 hours of data in [36]. Beyond greatly enlarged network capacity, this fully convolutional network better avoids the error-accumulation problem often faced by RNN-based methods. However, it still suffers from predicting averaged motion due to the one-to-many mappings present in the training data. The authors further introduce an adversarial loss and observe that it improves diversity but degrades the realism of the outputs. In contrast, our method avoids learning averaged motion by explicitly modeling the one-to-many mapping between audio and motion with the help of the extra motion-specific code.

Due to the lack of 3D human pose data, the above deep-learning-based methods [36, 11] have only been tested on 2D human pose data, i.e., 2D keypoint locations estimated from videos. Recently, [9] collected a 3D co-speech gesture dataset named the Trinity Speech-Gesture Dataset, containing 244 minutes of motion capture (MoCap) data with paired audio, enabling deep-network-based study of the correlation between audio and 3D motion. This dataset has been tested by StyleGestures [15], a flow-based algorithm [22, 15]. StyleGestures generates 3D gestures by sampling poses from a pose distribution predicted from previous motions and control signals. However, samples generated by flow-based methods [22, 15] are often not as good as those of VAEs and GANs. In contrast, our method learns the mapping between audio and motion with a customized VAE. Diverse results can be sampled since the VAE is a probabilistic generative model.

Human motion prediction. Many works focus on predicting future motion given previous motion [16, 35, 39]. It is natural to model sequence data with RNNs [10, 18, 30, 39], but [16] pointed out that RNN-based methods often suffer from error accumulation and are thus not good at predicting long-term human motion; they propose a fully convolutional generative adversarial network that achieves better performance at long-term human motion prediction. Similarly, we adopt a fully convolutional neural network since we need to generate long-term human motion. Specific to 3D human motion prediction, another type of error accumulation happens along the kinematic chain [35], because any small joint rotation error propagates to all its descendant joints, e.g., hands and fingers, resulting in considerable position error, especially for end-effectors (wrists, fingers). QuaterNet [35] therefore optimizes the joint positions calculated by forward kinematics when predicting long-term motion. In contrast, we optimize the joint rotation and position losses simultaneously, which also helps the model learn joint limits.

Multimodal generation tasks. Generating data with multimodality has received increasing interest in various tasks, such as image generation [17, 42] and motion generation [38, 43]. For image generation, MUNIT [17] disentangles the embedding of an image into a content feature and a style feature. BicycleGAN [42] combines cVAE-GAN [25] and cLR-GAN [5, 8] to encourage a bijective consistency between the latent code and the output, so that the model can generate different outputs by sampling different codes. For video generation, MoCoGAN [38] and S3VAE [43] disentangle the motion from the object to generate videos in which different objects perform similar motions. Different from [38, 43], our method disentangles the motion representation into audio-motion shared information and motion-specific information to model the one-to-many mapping between audio and motion.

3 Preliminaries

In this section, we first briefly introduce the variational autoencoder (VAE) [21], which is a widely used generative model. Then we describe 3D motion data and the most commonly used motion losses.

3.1 Variational autoencoder

Compared to an autoencoder, a VAE additionally imposes constraints on the latent code to enable sampling outputs from the latent space. Specifically, during training, the distribution of the latent code is constrained to match a target distribution via a KL divergence:

$D_{KL}\big(q(z \mid x)\,\|\,p(z)\big),$   (1)

where $x$ represents the input of the corresponding encoder (audio or motion in our case) and $z$ represents its corresponding latent code. This goal can be achieved by maximizing the Evidence Lower Bound (ELBO) [7]:

$\log p(x) \;\geq\; \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] \;-\; D_{KL}\big(q(z \mid x)\,\|\,p(z)\big).$   (2)

The second term of Eq. 2 is a KL divergence between two Gaussian distributions with diagonal covariance matrices. Since the prior distribution $p(z)$ is set to a standard Gaussian distribution (with a diagonal covariance matrix) in our model, the KL divergence can be computed in closed form as:

$D_{KL}\big(q(z \mid x)\,\|\,p(z)\big) \;=\; \frac{1}{2}\sum_{i=1}^{d}\big(\sigma_i^2 + \mu_i^2 - 1 - \log \sigma_i^2\big),$   (3)

where $d$ is the dimension of the distribution and $\mu_i$, $\sigma_i$ are the mean and standard deviation of its $i$-th dimension [7].
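As a sanity check, the closed-form KL term can be computed in a few lines. The following NumPy sketch uses our own naming and parameterizes the posterior by its log-variance (an assumption, not the paper's notation):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over the d latent
    dimensions; log_var holds log(sigma_i^2) per dimension."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

For mu = 0 and sigma = 1 the divergence is zero, as expected.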

3.2 Motion reconstruction loss

In our method, the generated motion is supervised with a motion reconstruction loss, consisting of a rotation loss, a position loss, and a speed loss. Formally, it is defined as:

$\mathcal{L}_{recon} \;=\; \lambda_{rot}\mathcal{L}_{rot} + \lambda_{pos}\mathcal{L}_{pos} + \lambda_{speed}\mathcal{L}_{speed},$   (4)

where $\lambda_{rot}$, $\lambda_{pos}$, $\lambda_{speed}$ are weights. We detail each term in the following.

The angular distance, i.e., geodesic distance, between the predicted rotation $\hat{R}_{t,j}$ and the ground truth $R_{t,j}$ is adopted as the rotation loss. Mathematically,

$\mathcal{L}_{rot} \;=\; \frac{1}{TJ}\sum_{t,j}\arccos\frac{\operatorname{tr}\big(\hat{R}_{t,j}R_{t,j}^{\top}\big) - 1}{2}.$   (5)

The position loss is the distance between the predicted and target joint positions:

$\mathcal{L}_{pos} \;=\; \frac{1}{TJ}\sum_{t,j}\big\|\hat{p}_{t,j} - p_{t,j}\big\|_1.$   (6)

The speed loss is introduced to help the model learn the complicated motion dynamics. In our work, the joint speed is defined as $v_{t,j} = p_{t+1,j} - p_{t,j}$. We match the predicted and target joint speeds as follows:

$\mathcal{L}_{speed} \;=\; \frac{1}{(T-1)J}\sum_{t,j}\big\|\hat{v}_{t,j} - v_{t,j}\big\|_1.$   (7)
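The three reconstruction terms can be sketched as follows. This is a NumPy sketch with our own function names; the loss weights are placeholders rather than the paper's values, and the batch dimension is omitted:

```python
import numpy as np

def rotation_loss(R_pred, R_gt):
    """Geodesic angle between rotation matrices (Eq. 5); R_*: (T, J, 3, 3)."""
    tr = np.einsum('tjik,tjik->tj', R_pred, R_gt)   # trace of R_pred R_gt^T
    cos = np.clip((tr - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos).mean()

def position_loss(p_pred, p_gt):
    """L1 joint position error (Eq. 6); p_*: (T, J, 3)."""
    return np.abs(p_pred - p_gt).mean()

def speed_loss(p_pred, p_gt):
    """Speed v_t = p_{t+1} - p_t matched in L1 (Eq. 7)."""
    v_pred = p_pred[1:] - p_pred[:-1]
    v_gt = p_gt[1:] - p_gt[:-1]
    return np.abs(v_pred - v_gt).mean()

def motion_reconstruction_loss(R_pred, R_gt, p_pred, p_gt,
                               w_rot=1.0, w_pos=1.0, w_speed=1.0):
    """Weighted sum of the three terms (Eq. 4); weights are placeholders."""
    return (w_rot * rotation_loss(R_pred, R_gt)
            + w_pos * position_loss(p_pred, p_gt)
            + w_speed * speed_loss(p_pred, p_gt))
```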

Our model can be trained with 2D or 3D motion data. When modeling 2D human motion, our method directly predicts joint positions. When modeling 3D human motion, it predicts joint rotations and computes the 3D joint positions with forward kinematics (FK). Concretely, FK takes as input each joint's rotation matrix relative to its parent and its relative translation to its parent (i.e., bone length), and outputs joint positions:

$R^{g}_{t,j} = R^{g}_{t,\mathrm{par}(j)} R_{t,j}, \qquad p_{t,j} = p_{t,\mathrm{par}(j)} + R^{g}_{t,\mathrm{par}(j)}\, o_j,$   (8)

where $R_{t,j}$ represents the local rotation matrix of joint $j$ in frame $t$ (and $R^{g}_{t,j}$ its accumulated global rotation), $p_{t,j}$ represents the position of joint $j$ in frame $t$, $o_j$ represents the relative translation of joint $j$ to its parent, and $\mathrm{par}(j)$ represents the parent joint index of joint $j$. We will always use $j$ and $t$ to index joints and frames in the following. Our model predicts joint rotations in the 6D representation [41], a continuous representation that helps the optimization of the model; it is then converted to the rotation matrix $R_{t,j}$ by a Gram-Schmidt-like process.
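The 6D-to-matrix conversion and the FK recursion can be sketched as follows (a NumPy sketch under our own naming; the time dimension is omitted for brevity):

```python
import numpy as np

def rot6d_to_matrix(x):
    """Gram-Schmidt-like map from the 6D representation [41] to a rotation
    matrix; x: (6,) holds the first two (possibly unnormalized) columns."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)
    b2 = a2 - np.dot(b1, a2) * b1
    b2 = b2 / np.linalg.norm(b2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=-1)   # columns b1, b2, b3

def forward_kinematics(rotations, offsets, parents):
    """Accumulate local rotations along the kinematic chain (Eq. 8).
    rotations: (J, 3, 3) local rotation of each joint w.r.t. its parent
    offsets:   (J, 3)    translation of each joint relative to its parent
    parents:   parent indices, parents[0] == -1 for the root."""
    J = len(parents)
    global_R = [None] * J
    positions = np.zeros((J, 3))
    for j in range(J):
        if parents[j] < 0:                       # root joint
            global_R[j] = rotations[j]
            positions[j] = offsets[j]
        else:
            p = parents[j]
            global_R[j] = global_R[p] @ rotations[j]
            positions[j] = positions[p] + global_R[p] @ offsets[j]
    return positions
```

Rotating a parent joint moves every descendant, which is exactly why small rotation errors accumulate toward the end-effectors.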

4 Audio2Gestures

The proposed Audio2Gestures algorithm is detailed in this section. We first present our Audio2Gestures network by formulating the multimodal motion generation problem in Sec. 4.1, then we detail the training process in Sec. 4.2.


Figure 2: Our method explicitly models the audio-motion mapping by splitting the latent code into shared and motion-specific codes. The decoder generates different motions by recombining the shared and motion-specific codes extracted from different sources. The data flow in blue is only used at the training stage because we do not have motion data during inference.

4.1 Network structure

We use a conditional encoder-decoder network to model the correlation between audio $a$ and motion $m = \{p_1, \dots, p_T\}$, where $p_t$ represents the joint positions of frame $t$. As shown in Fig. 2, our model is made up of an audio encoder $E_a$, a motion encoder $E_m$, a mapping net $M$ that produces motion-specific codes during inference, and a common decoder $D$ that generates motions from latent codes. The latent code is explicitly split into two parts, a shared code $z_s$ and a motion-specific code $z_m$, to account for the frequently occurring one-to-many mapping between the same (technically, very similar) audio and many different possible motions. The mapping network is introduced to facilitate sampling motion-specific codes. Under this formulation, given the same audio input (resulting in the same shared code $z_s$), different motions produce different motion-specific codes $z_m$ through the motion encoder $E_m$, resulting in different full latent codes $(z_s, z_m)$, so that the network can better model the one-to-many mapping.

During inference, the shared feature $z_s$ is extracted with $E_a$ from the given audio $a$. The motion-specific feature $z_m$ is generated with the mapping net $M$ from a randomly sampled signal $n$. Both are fed into the decoder to produce the final motion, i.e., $\hat{m} = D(z_s, z_m)$.

During training, given paired audio-motion data $a$ and $m$, their features are first extracted by the encoders: $z^a_s = E_a(a)$ and $(z^m_s, z_m) = E_m(m)$. The decoder learns to reconstruct the input motion from the extracted features, i.e., $\hat{m} = D(z^m_s, z_m)$. The model is expected to learn the joint embedding of audio and motion by guiding the decoder to generate the same target motion from shared codes extracted from different sources. In practice, however, we notice that the decoder will ignore the shared code and reconstruct the motion only from $z_m$. This is unwanted, since the final motion would be solely determined by the motion-specific features, completely uncorrelated with the control signal (audio). Thus, another data flow, $\hat{m} = D(z^a_s, \hat{z}_m)$, is introduced so that the decoder has to utilize the information contained in the shared code extracted from the audio to reconstruct the target. Here $\hat{z}_m$ is generated from the mapping net $M$, whose input is a random signal drawn from a Gaussian distribution; the mean and variance of this distribution are calculated per channel from the motion-specific codes of the target motions. We experimentally find that using a mapping network helps improve the realism of the generated motions, mainly because the mapping network helps align the sampled feature with the motion-specific feature.

4.2 Latent code learning


Figure 3: The training details of our model. Our model is trained with alignment constraint, motion reconstruction losses, relaxed motion loss, bicycle constraint, diversity loss and KL divergence. The alignment constraints and motion reconstruction loss help the model learn the audio-motion joint embedding. The relaxed motion loss avoids the degeneration of the shared code. The bicycle constraints and the diversity loss help reduce the mode-collapse problem and guide the model to generate multimodal motions. The KL divergence is omitted in the figure for the sake of brevity.

To better learn the split audio-motion shared and motion-specific latent codes, five types of losses are introduced (Fig. 3). Alignment constraint and relaxed motion loss are introduced to learn the joint embedding (i.e., shared code) of the audio and motion. Bicycle constraints and diversity loss are introduced to model the multimodality of the motions. KL divergence has been described in Sec. 3 and thus omitted. The details are as follows.

Shared code alignment. The shared codes of paired audio and motion are expected to be the same, so that we can safely use the audio-extracted shared code during inference and generate realistic, audio-related motions. We align the shared codes of audio and motion with the alignment constraint:

$\mathcal{L}_{align} \;=\; \big\|z^a_s - z^m_s\big\|_1.$   (9)

Degeneration avoidance. As described in Sec. 4.1, training easily results in a degenerated network in which the shared code is completely ignored and has no effect on the generated motion. Our solution to alleviating such degeneration is to introduce an extra motion reconstruction using the audio-extracted shared code $z^a_s$ and a random motion-specific code $\hat{z}_m$. Ideally, the generated motion resembles its ground truth in some respects but is not identical to it. In our case, we assume the generated poses are similar in 3D world space. Thus we propose the relaxed motion loss, which computes the position loss but penalizes the model only when the distance is larger than a certain threshold $\tau$:

$\mathcal{L}_{relaxed} \;=\; \frac{1}{TJ}\sum_{t,j}\max\big(\big\|\hat{p}_{t,j} - p_{t,j}\big\|_1 - \tau,\; 0\big).$   (10)
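A minimal sketch of this loss (NumPy, our own naming; the actual value of the threshold τ is not reproduced here):

```python
import numpy as np

def relaxed_motion_loss(p_pred, p_gt, tau):
    """Penalize a joint only when its L1 deviation from the target exceeds
    the threshold tau (Eq. 10); p_*: (T, J, 3)."""
    dist = np.abs(p_pred - p_gt).sum(axis=-1)   # per-joint L1 distance
    return np.maximum(dist - tau, 0.0).mean()
```

Inside the τ-ball around the target the gradient is zero, so the motion-specific code is free to vary the pose there.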

Motion-specific code alignment. Although the motion-specific code could be sampled from a Gaussian distribution directly, we noticed that the realism and diversity of the resulting motions are poor. The problem is caused by the misalignment between the Gaussian distribution and the motion-specific code distribution. Thus, the mapping net is introduced to map the signal sampled from the Gaussian space to the motion-specific embedding. At the training stage, we calculate the mean and variance of the motion-specific codes for every channel and every sample. The sampled features are fed into the mapping network, which is itself a variational autoencoder, before being concatenated with different shared codes to generate motions.

Motion-specific code reconstruction. Although splitting the motion code into an audio-motion shared part and a motion-specific part lets the model represent the multimodal distribution of audio-motion pairs, it is not guaranteed that the decoder can sample multimodal motions. For example, if the mapping net maps the sampled signal to only a single mode of the multimodal distribution, the decoder can still only generate unimodal motions; this is also known as the mode-collapse problem. The bicycle constraint [42] (motion → code → motion and code → motion → code) is introduced to avoid the mode-collapse problem by encouraging a bijection between the motion and the motion-specific code. Since the motion reconstruction loss has already been introduced, an extra reconstruction loss on the motion-specific code is added as a supplement:

$\mathcal{L}_{code} \;=\; \big\|E_m\big(D(z^a_s, \hat{z}_m)\big)_{z_m} - \hat{z}_m\big\|_1,$   (11)

where $(\cdot)_{z_m}$ denotes the motion-specific part of the encoded latent code.

Motion diversification. To further encourage multimodality of the generated motion, a diversity loss [29, 6] is introduced. Maximizing this loss encourages the mapping network to explore the meaningful motion-specific code space. We follow the setting in [6] and directly maximize the joint position distance between two sampled motions, since this is more stable than the original formulation [29]:

$\mathcal{L}_{div} \;=\; -\big\|D(z_s, \hat{z}^1_m) - D(z_s, \hat{z}^2_m)\big\|_1.$   (12)

5 Experiments

In this section, we first introduce the datasets, evaluation metrics and implementation details in Sec. 5.1-5.3. Then we show the performance of our algorithm and compare it with state-of-the-art methods in Sec. 5.4. Finally, we analyze the influence of each module of our model through ablation studies in Sec. 5.5. More results are presented on our project page: https://jingli513.github.io/audio2gestures.

5.1 Datasets

Trinity dataset. The Trinity Gesture Dataset [9] is a large-scale speech-to-gesture synthesis dataset. It records a male native English speaker talking about many different topics, such as movies and daily activities. The dataset contains 23 sequences of paired audio-motion data, 244 minutes in total. The audio is recorded at 44 kHz. The motion data, consisting of 56 joints, are recorded at 60 frames per second (FPS) or 120 FPS using a Vicon motion capture system.

S2G-Ellen dataset. The S2G-Ellen dataset, a subset of the Speech2Gesture dataset [11], contains the positions of 49 2D upper-body joints estimated from 504 YouTube videos, including 406 training sequences (469,513 frames), 46 validation sequences (46,027 frames), and 52 test sequences (59,922 frames). The joints, estimated using OpenPose [4], include the neck, shoulders, elbows, wrists, and hands.

5.2 Evaluation metrics

5.2.1 Quantitative metrics

Realism. Following Ginosar et al.’s [11] suggestion, the L1 distance of joint positions (Eq. 13) and the percentage of correct 3D keypoints (PCK, Eq. 14) are adopted to evaluate the realism of the generated motion. Specifically, the L1 distance is calculated by averaging the position error over all joints between the prediction $\hat{p}$ and the ground truth $p$:

$L1 \;=\; \frac{1}{TJ}\sum_{t,j}\big\|\hat{p}_{t,j} - p_{t,j}\big\|_1.$   (13)

The PCK metric calculates the percentage of correctly predicted keypoints, where a predicted keypoint is considered correct if its distance to the target is smaller than a threshold $\delta$:

$PCK \;=\; \frac{1}{TJ}\sum_{t,j}\mathbb{1}\big[\big\|\hat{p}_{t,j} - p_{t,j}\big\|_2 < \delta\big],$   (14)

where $\mathbb{1}$ is the indicator function and $p_{t,j}$ indicates joint $j$’s position in frame $t$. As in [11], $\delta$ is set to 0.2 in our experiments.
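Both realism metrics are straightforward to compute; a NumPy sketch (our own naming; the L1 average here also runs over coordinates, which differs from a per-joint average only by a constant factor):

```python
import numpy as np

def l1_error(p_pred, p_gt):
    """Average L1 joint position error (Eq. 13)."""
    return np.abs(p_pred - p_gt).mean()

def pck(p_pred, p_gt, delta=0.2):
    """Fraction of joints whose Euclidean distance to the target is below
    the threshold delta (Eq. 14); delta = 0.2 following [11]."""
    dist = np.linalg.norm(p_pred - p_gt, axis=-1)   # (T, J)
    return (dist < delta).mean()
```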

Diversity. Diversity measures how many different poses/motions are generated within one long motion. For example, RNN-based methods easily get stuck in some static pose as the generated motion becomes longer, and such static motions, which are clearly undesired, should get low diversity scores. We first split the generated motion into equal-length non-overlapping clips (50 frames per clip in our experiments) and calculate diversity as the average pairwise distance between the clips. Formally,

$\mathrm{Diversity} \;=\; \frac{1}{N(N-1)}\sum_{a \neq b}\big\|c_a - c_b\big\|_1,$   (15)

where $c_a$ and $c_b$ represent clips from the same motion sequence and $N$ is the number of motion clips. Please note that jittery motions and invalid poses can also result in a high diversity score, so higher diversity is preferred only if the generated motion is natural.

Multimodality. Multimodality measures how many different motions can be sampled (through multiple runs) for a given audio clip. Note that multimodality measures motion differences across different generated motions, while diversity measures (short) motion clip differences within the same (long) motion. We measure multimodality by generating motions for a given audio $M$ times ($M = 20$ in our experiments) and calculating the average pairwise distance between the motions:

$\mathrm{Multimodality} \;=\; \frac{1}{M(M-1)}\sum_{a \neq b}\big\|\hat{m}_a - \hat{m}_b\big\|_1,$   (16)

where $\hat{m}_a$ and $\hat{m}_b$ represent sampled motions generated through different runs for the given audio. Similar to diversity, invalid motions will also result in abnormally high multimodality scores.
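Both scores reduce to average pairwise distances; a NumPy sketch under our own naming:

```python
import numpy as np

def diversity(motion, clip_len=50):
    """Split one long motion into non-overlapping clips and average the
    pairwise L1 distance between clips (Eq. 15)."""
    T = (len(motion) // clip_len) * clip_len
    clips = motion[:T].reshape(-1, clip_len, *motion.shape[1:])
    n = len(clips)
    dists = [np.abs(clips[a] - clips[b]).mean()
             for a in range(n) for b in range(n) if a != b]
    return float(np.mean(dists))

def multimodality(motions):
    """Average pairwise L1 distance between motions generated for the same
    audio across different runs (Eq. 16); 20 runs in the paper."""
    n = len(motions)
    dists = [np.abs(motions[a] - motions[b]).mean()
             for a in range(n) for b in range(n) if a != b]
    return float(np.mean(dists))
```

A fully static motion scores zero diversity, matching the intent of the metric.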

5.2.2 User studies

To evaluate the results qualitatively, we conduct user studies to analyze the visual quality of the generated motions. Our questionnaire contains four 20-second videos. The motion clips shown in each video are generated by the different methods from the same audio clip. The participants are asked to rate the motion clips on the following three aspects:

  1. Realism: which one is more realistic?

  2. Diversity: which motion has more details?

  3. Matching degree: which motion matches the audio better?

The results of the questionnaires are shown in Fig. 4. We show the count of each ranking in the figure. The average score of each metric for each algorithm is listed after the corresponding bar. The scores assigned to the ratings {best, fine, not bad, bad, worst} are {5, 4, 3, 2, 1}, respectively.

Figure 4: User study results comparing our method against the state-of-the-art methods. “S2G” is short for Speech2Gesture [11]. The horizontal axis represents the number of samples rated by the participants. In total, 160 comparisons have been rated (40 participants, 4 comparisons each questionnaire). The average score (higher is better) for each method is listed on the right. Bars with different colors indicate the count of the corresponding ranking of each algorithm. The video results are in our project page.
Dataset     Method                 L1 ↓           PCK ↑         Diversity ↑    Multimodality
Trinity     S2G w/o GAN [11]       7.71           0.82          5.99           -
Trinity     S2G [11]               24.68          0.39          2.46           -
Trinity     StyleGestures [2]      18.97 (18.07)  0.34 (0.34)   2.34 (3.79)    7.55
Trinity     Ours                   7.84 (7.65)    0.82 (0.83)   6.32 (6.52)    4.11
S2G-Ellen   S2G w/o GAN [11]       0.74           0.37          0.61           -
S2G-Ellen   S2G [11]               1.08           0.23          0.89           -
S2G-Ellen   Ours                   0.94 (0.92)    0.33 (0.34)   0.84 (0.85)    0.77
Table 1: Quantitative results on the Trinity and S2G-Ellen datasets. ↑ means higher is better and ↓ means lower is better. For methods supporting sampling, we run 20 tests and report the average score and the best score (in parentheses). Speech2Gesture (“S2G” in the table) cannot generate multimodal motions.
Method                  L1 ↓          PCK ↑         Diversity ↑   Multimodality
baseline                8.22          0.80          6.20          -
+ split                 8.69 (8.30)   0.77 (0.78)   5.83 (6.02)   5.90
+ mapping net           8.06 (7.91)   0.80 (0.81)   5.86 (6.05)   3.44
+ bicycle constraint    7.94 (7.63)   0.80 (0.82)   6.31 (6.46)   3.68
+ diversity loss        7.84 (7.65)   0.82 (0.83)   6.32 (6.52)   4.11
Table 2: Ablation study results on the Trinity dataset. Each line adds a new component on top of the previous line. For methods supporting sampling, we run 20 tests and report the average score and the best score (in parentheses).

5.3 Implementation details

Data processing. We detail the data processing of Trinity dataset and S2G-Ellen dataset here.

(1) Trinity dataset. The audio data are resampled to 16 kHz for extracting log-mel spectrogram [37] features using librosa [31]. More concretely, the hop size is set to SR/FR, where SR is the sample rate of the audio and FR is the frame rate of the motion, so that the resulting audio features have the same length as the input motion. In our case, the resulting hop size is 533, since SR is 16000 and FR is 30. The dimension of the log-mel spectrogram is 64.
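The alignment arithmetic is simply an integer division of the sample rate by the motion frame rate (a pure-Python sketch; the actual feature extraction uses librosa's mel-spectrogram routines, which are not reproduced here):

```python
def hop_size(sr, fr):
    """Hop size so that audio feature frames align 1:1 with motion frames."""
    return sr // fr

# 16 kHz audio against 30 FPS motion gives a hop of 533 samples
assert hop_size(16000, 30) == 533
```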

The motion data are downsampled to 30 FPS and then retargeted to the SMPL-X [34] model. SMPL-X is an expressive articulated human model consisting of 54 joints (21 body joints, 30 hand joints, and 3 face joints), which has been widely used in 3D pose estimation and prediction [19, 34, 40, 24]. Joint rotations are represented in the 6D rotation representation [41] in our experiments, a smooth representation that helps the model approximate the target more easily. Note that the finger motions are removed due to non-negligible noise.

(2) S2G-Ellen dataset. Following [11], the data are split into 64-frame clips (4.2 seconds). Audio features are extracted in the same way as for the Trinity dataset. The body joints are represented in a local coordinate frame relative to the root, i.e., the origin of the coordinate frame is the root joint.

Network. Every encoder, decoder and mapping net consists of four residual blocks [14] composed of 1D convolutions and ReLU non-linearities [1]. The residual block is similar to [3] with several modifications; specifically, the causal convolutions, whose kernels see only the history, are replaced with normal symmetric 1D convolutions that see both the history and the future. Both the shared code and the motion-specific code are set to 16 dimensions.
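Such a block might look as follows. This is a PyTorch sketch under our own assumptions (kernel size 3 with symmetric 'same' padding; the paper's exact channel counts and kernel sizes are not restated here):

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """One residual block with symmetric (non-causal) 1D convolutions and
    ReLU; padding=1 lets each kernel see both history and future frames."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):              # x: (batch, channels, time)
        h = self.relu(self.conv1(x))
        h = self.conv2(h)
        return self.relu(x + h)        # residual connection
```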

Training. At the training stage, we randomly crop a 4.2-second segment of the audio and motion data, which is 64 frames for the S2G dataset (15 FPS) and 128 frames for the Trinity dataset (30 FPS). The model weights are initialized with the Xavier method [12] and trained for 180K steps using the Adam [20] optimizer with a batch size of 32 and a fixed learning rate. The loss weights $\lambda_{rot}$, $\lambda_{pos}$, $\lambda_{speed}$ and the threshold $\tau$ are kept fixed in our experiments. Our model is implemented in PyTorch [33].

5.4 Comparison with state-of-the-art methods

We compare our method with two recent representative state-of-the-art methods on the Trinity dataset: the LSTM-based StyleGestures [2] and the CNN-based Speech2Gesture [11]. StyleGestures adapts normalizing flows [23, 22, 15] to speech-driven gesture synthesis. We train StyleGestures using the code released by the authors, with the training data processed in the way the authors indicate. (The motions generated by StyleGestures are 20 FPS and use a different skeleton from our method; we upsample the predicted motion to 30 FPS and retarget it to the SMPL-X skeleton with MotionBuilder.) Speech2Gesture, originally designed to map speech to 2D human keypoints, consists of an audio encoder and a motion decoder; its final output layer has been adjusted to predict 3D joint rotations, and it is trained with the same losses as our method.

Quantitative experimental results are listed in Tab. 1 and user study results in Fig. 4. Both results show that our method outperforms previous state-of-the-art algorithms on the realism and diversity metrics, demonstrating that it is beneficial to explicitly model the one-to-many mapping between audio and motion in the network structure.

While StyleGestures supports generating different motions for the same audio by sampling, the quality of its generated motions is not very appealing. Its diversity score is also the lowest, because the LSTM output easily gets stuck in certain poses, resulting in long static motion afterwards. The algorithm is not good at generating long sequences due to the error-accumulation problem of the LSTM: the authors test their algorithm on 400-frame (13-second) sequences, but obviously deteriorated motions are produced when it is used to generate 5000-frame (166-second) motions.

As for Speech2Gesture, its generated motions show realism similar to ours but obtain a lower diversity score (Tab. 1), and the method does not support generating multimodal motions. Also note that Speech2Gesture with GAN generates many invalid poses and performs worst. We trained that model several times with learning rates ranging from 0.0001 to 0.01 and report the best performance here; the poor results are likely caused by the instability of adversarial training.

5.5 Ablation study

To gain more insight into the proposed components of our model, we test several variants on the 3D Trinity dataset (Tab. 2). We run every variant 20 times and report both the averaged and the best performance to reduce the influence of randomness. Note that the randomness of our model comes from two sources: the variational autoencoder and the motion-specific feature sampling.

We start with a “baseline” model, which excludes the mapping net and the split code. It is trained only with the motion reconstruction losses (Eq. 4) and the shared code constraint (Eq. 9). The averaged scores “avg ”, “avg PCK”, and “avg Diversity” of this model equal the best scores “min ”, “max PCK”, and “max Diversity”, which indicates that the randomness of the VAE has almost no effect on generating multimodal motions.

The next setting is termed “+split”: it splits the output of the motion encoder into shared and motion-specific codes and introduces the relaxed motion loss (Eq. 10). This modification explicitly enables the network to handle the one-to-many mapping, but it harms both realism (see “avg ”, “min ”, “avg PCK” and “max PCK”) and diversity: both the and the PCK metrics are worse than “baseline”. These abnormal results are mainly caused by misalignment between the sampled signal and the motion-specific feature. Comparing the sampled signals with the motion-specific features, we find a large gap in their statistical characteristics, such as the mean and variance of their derivatives.
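The statistical gap described above can be illustrated with a short sketch. The tensors below are synthetic stand-ins, not real model outputs, and `derivative_stats` is a hypothetical helper: i.i.d. Gaussian samples change abruptly from frame to frame, while encoded motion features are temporally smooth, so their derivative variances differ sharply.

```python
import torch

def derivative_stats(x):
    """Mean and variance of first-order temporal derivatives of a
    (T, D) code sequence; the paper compares such statistics between
    the sampled signal and the encoded motion-specific features."""
    d = x[1:] - x[:-1]
    return d.mean().item(), d.var().item()

torch.manual_seed(0)
noise = torch.randn(128, 32)                            # raw sampled signal
feat = torch.cumsum(0.1 * torch.randn(128, 32), dim=0)  # smooth feature proxy
_, noise_var = derivative_stats(noise)
_, feat_var = derivative_stats(feat)
# The variance gap below is the misalignment that motivates the mapping net.
print(noise_var > feat_var)
```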

Thus, a mapping network (“+mapping net”) is introduced to automatically align the sampled signal with the motion-specific feature. Although multimodality drops compared to “+split”, this modification substantially improves the other metrics; note that a higher multimodality score is only meaningful when the generated motions are natural, as described in Sec. 5.2. The “+mapping net” model also outperforms the baseline on the metric and achieves a similar PCK while, unlike the baseline, being able to generate multimodal motions. However, its diversity is lower than the baseline’s, and its realism score is also worse, perhaps because users prefer motions with more dynamics. We believe this is caused by the mode-collapse problem that many generative methods suffer from.
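A mapping network of this kind can be sketched as a small MLP that transforms Gaussian noise into the space of motion-specific codes. The layer sizes, depth, and activation below are assumptions for illustration; the paper's exact configuration is not specified in this section.

```python
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Sketch of a mapping network that maps sampled Gaussian noise into
    the distribution of motion-specific codes (hyperparameters assumed)."""
    def __init__(self, noise_dim=32, code_dim=32, hidden=256, n_layers=4):
        super().__init__()
        layers, dim = [], noise_dim
        for _ in range(n_layers - 1):
            layers += [nn.Linear(dim, hidden), nn.LeakyReLU(0.2)]
            dim = hidden
        layers.append(nn.Linear(dim, code_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # z: (B, T, noise_dim) -> motion-specific codes (B, T, code_dim),
        # which replace the encoded codes at inference time.
        return self.net(z)

mapper = MappingNet()
z = torch.randn(4, 128, 32)
code = mapper(z)
print(code.shape)
```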

To overcome this problem, two simple yet effective losses, the bicycle constraint and the diversity loss, are introduced. The bicycle constraint improves the multimodality of the motions from 3.44 to 3.68, and the average diversity increases from 5.86 to 6.31. The diversity loss further improves motion diversity and multimodality while having little influence on realism. The final model outperforms the baseline in all quantitative indicators, which shows that the audio-to-motion mapping can be modeled better by explicitly modeling the one-to-many correlation.
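A diversity loss of this kind can be sketched as a mode-seeking term in the spirit of [29]: motions decoded from two different noise samples are rewarded for being far apart in proportion to the distance between the samples. The exact formulation used by the paper is given in Sec. 4.2; the version below is an illustrative approximation.

```python
import torch

def diversity_loss(motion1, motion2, z1, z2, eps=1e-8):
    """Mode-seeking style diversity term (sketch): the distance between
    motions decoded from two noise samples should grow with the distance
    between the samples. Minimizing the negative ratio discourages
    mode collapse."""
    num = (motion1 - motion2).abs().mean()
    den = (z1 - z2).abs().mean() + eps
    return -num / den

# Two distant noise draws should be rewarded (lower loss) for producing
# motions that are far apart.
z1, z2 = torch.zeros(4, 16), torch.ones(4, 16)
base = torch.randn(4, 128, 72)
far, near = base + 10.0, base + 1.0
print(diversity_loss(base, far, z1, z2) < diversity_loss(base, near, z1, z2))  # True
```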

5.6 Application

We notice that the motion-specific code extracted from a motion strongly controls the final motion output: the synthesized motion is almost the same as the original motion used to extract the code. This property is well suited to motion-synthesis applications where pre-defined motions are placed on the timeline as constraints. For example, given a motion clip that we want the avatar to perform over a specified interval, we can extract its motion-specific code with the motion encoder and directly substitute it for the sampled motion-specific code over that interval. Our model then generates a smooth motion from the edited motion-specific code. Please refer to our project page for a demonstration.
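The timeline-editing step above amounts to a simple overwrite in code space before decoding. The helper name and shapes below are assumptions for illustration; in the paper the edited codes are then passed through the motion decoder, which produces smooth transitions around the inserted clip.

```python
import torch

def insert_clip_codes(sampled_codes, clip_codes, start):
    """Timeline-editing sketch: overwrite sampled motion-specific codes
    with codes encoded from a user-specified motion clip.

    sampled_codes: (T, D) codes drawn via the mapping network
    clip_codes:    (K, D) codes from the motion encoder for the clip
    start:         frame index where the clip should begin
    """
    edited = sampled_codes.clone()
    edited[start:start + clip_codes.shape[0]] = clip_codes
    return edited

sampled = torch.randn(300, 32)   # codes for the whole timeline
clip = torch.randn(60, 32)       # codes extracted from the user's clip
edited = insert_clip_codes(sampled, clip, start=120)
# Frames 120-179 now carry the clip's codes; all other frames are unchanged.
```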

6 Conclusion

In this paper, we explicitly model the one-to-many audio-to-motion mapping by splitting the latent code into a shared code and a motion-specific code. This simple solution, together with our customized training strategy, effectively improves the realism, diversity, and multimodality of the generated motion. We also demonstrate an application in which the model inserts a specific motion into the generated sequence by editing the motion-specific code, with smooth and realistic transitions. Although the model generates multimodal motions and gives users control over the output, some limitations remain. For example, the generated motion is not closely related to what the person says; future work could improve the semantic relevance of the generated motion by incorporating word embeddings as an additional condition.

7 Acknowledgement

This work was supported by the Natural Science Foundation of China (U2013210, 62006060), the Shenzhen Research Council (JCYJ20210324120202006), the Shenzhen Stable Support Plan Fund for Universities (GXWD20201230155427003-20200824125730001), and the Special Research project on COVID-19 Prevention and Control of Guangdong Province (2020KZDZDX1227).

References

  • [1] A. F. Agarap (2018) Deep learning using rectified linear units (ReLU). arXiv preprint arXiv:1803.08375. Cited by: §5.3.
  • [2] S. Alexanderson, G. E. Henter, T. Kucherenko, and J. Beskow (2020) Style-controllable speech-driven gesture synthesis using normalising flows. Comput. Graph. Forum 39 (2), pp. 487–496. External Links: Link, Document Cited by: §5.4, Table 1.
  • [3] S. Bai, J. Z. Kolter, and V. Koltun (2018) An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271. Cited by: §5.3.
  • [4] Z. Cao, T. Simon, S. Wei, and Y. Sheikh (2017) Realtime multi-person 2d pose estimation using part affinity fields. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 7291–7299. Cited by: §5.1.
  • [5] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel (2016) Infogan: interpretable representation learning by information maximizing generative adversarial nets. In Adv. Neural Inform. Process. Syst., pp. 2172–2180. Cited by: §2.
  • [6] Y. Choi, Y. Uh, J. Yoo, and J. Ha (2020-06) StarGAN v2: diverse image synthesis for multiple domains. In IEEE Conf. Comput. Vis. Pattern Recog., Cited by: §4.2.
  • [7] C. Doersch (2016) Tutorial on variational autoencoders. stat 1050, pp. 13. Cited by: §3.1, §3.1.
  • [8] J. Donahue, P. Krähenbühl, and T. Darrell (2016) Adversarial feature learning. arXiv preprint arXiv:1605.09782. Cited by: §2.
  • [9] Y. Ferstl and R. McDonnell (2018-11) IVA: investigating the use of recurrent motion modelling for speech gesture generation. In IVA ’18 Proceedings of the 18th International Conference on Intelligent Virtual Agents, External Links: Link Cited by: Figure 1, §2, §5.1.
  • [10] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik (2015) Recurrent network models for human dynamics. In Int. Conf. Comput. Vis., pp. 4346–4354. Cited by: §2.
  • [11] S. Ginosar, A. Bar, G. Kohavi, C. Chan, A. Owens, and J. Malik (2019-06) Learning individual styles of conversational gesture. In IEEE Conf. Comput. Vis. Pattern Recog., Cited by: §1, §2, §2, Figure 4, §5.1, §5.2.1, §5.3, §5.4, Table 1.
  • [12] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256. Cited by: §5.3.
  • [13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Adv. Neural Inform. Process. Syst., pp. 2672–2680. Cited by: §2.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 770–778. Cited by: §5.3.
  • [15] G. E. Henter, S. Alexanderson, and J. Beskow (2020-11) MoGlow: probabilistic and controllable motion synthesis using normalising flows. ACM Trans. Graph. 39 (6). External Links: ISSN 0730-0301, Link, Document Cited by: §2, §5.4.
  • [16] A. Hernandez, J. Gall, and F. Moreno-Noguer (2019) Human motion prediction via spatio-temporal inpainting. In Int. Conf. Comput. Vis., pp. 7134–7143. Cited by: §2, §2.
  • [17] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In Eur. Conf. Comput. Vis., pp. 172–189. Cited by: §2.
  • [18] A. Jain, A. R. Zamir, S. Savarese, and A. Saxena (2016) Structural-RNN: deep learning on spatio-temporal graphs. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 5308–5317. Cited by: §2.
  • [19] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik (2018) End-to-End Recovery of Human Shape and Pose. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 7122–7131. External Links: Document, 1712.06584, ISBN 9781538664209, ISSN 10636919 Cited by: §5.3.
  • [20] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §5.3.
  • [21] D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. stat 1050, pp. 1. Cited by: §2, §3.
  • [22] D. P. Kingma and P. Dhariwal (2018) Glow: generative flow with invertible 1x1 convolutions. In Adv. Neural Inform. Process. Syst., pp. 10215–10224. Cited by: §2, §5.4.
  • [23] I. Kobyzev, S. Prince, and M. Brubaker (2020) Normalizing flows: an introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence. Cited by: §5.4.
  • [24] N. Kolotouros, G. Pavlakos, M. Black, and K. Daniilidis (2019) Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In Proceedings of the IEEE International Conference on Computer Vision, Vol. 2019-Octob, pp. 2252–2261. External Links: Document, 1909.12828, ISBN 9781728148038, ISSN 15505499, Link Cited by: §5.3.
  • [25] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther (2016) Autoencoding beyond pixels using a learned similarity metric. In Proceedings of the 33rd International Conference on Machine Learning (ICML’16), pp. 1558–1566. Cited by: §2.
  • [26] S. Levine, P. Krähenbühl, S. Thrun, and V. Koltun (2010) Gesture controllers. In ACM SIGGRAPH 2010 papers, pp. 1–11. Cited by: §2.
  • [27] S. Levine, C. Theobalt, and V. Koltun (2009-12) Real-Time Prosody-Driven Synthesis of Body Language. ACM Trans. Graph. 28 (5), pp. 1–10. External Links: Document, ISSN 15577368, Link Cited by: §2.
  • [28] H. Y. Ling, F. Zinno, G. Cheng, and M. Van De Panne (2020-07) Character controllers using motion vaes. ACM Trans. Graph. 39 (4). External Links: ISSN 0730-0301, Link, Document Cited by: §2.
  • [29] Q. Mao, H. Lee, H. Tseng, S. Ma, and M. Yang (2019) Mode seeking generative adversarial networks for diverse image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §4.2.
  • [30] J. Martinez, M. J. Black, and J. Romero (2017) On human motion prediction using recurrent neural networks. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 2891–2900. Cited by: §2.
  • [31] Librosa/librosa: 0.8.0 External Links: Document, Link Cited by: §5.3.
  • [32] Mixamo. Note: https://www.mixamo.com[Online; accessed 15-March-2021] Cited by: Figure 1.
  • [33] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §5.3.
  • [34] G. Pavlakos, V. Choutas, N. Ghorbani, T. Bolkart, A. A. A. Osman, D. Tzionas, and M. J. Black (2019) Expressive body capture: 3d hands, face, and body from a single image. In IEEE Conf. Comput. Vis. Pattern Recog., Cited by: §5.3.
  • [35] D. Pavllo, D. Grangier, and M. Auli (2018) QuaterNet: a quaternion-based recurrent model for human motion. In Brit. Mach. Vis. Conf., Cited by: §2.
  • [36] E. Shlizerman, L. Dery, H. Schoen, and I. Kemelmacher-Shlizerman (2018) Audio to body dynamics. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 7574–7583. Cited by: §2, §2, §2.
  • [37] S. S. Stevens, J. Volkmann, and E. B. Newman (1937) A scale for the measurement of the psychological magnitude pitch. The Journal of the Acoustical Society of America 8 (3), pp. 185–190. Cited by: §5.3.
  • [38] S. Tulyakov, M. Liu, X. Yang, and J. Kautz (2018) Mocogan: decomposing motion and content for video generation. In IEEE Conf. Comput. Vis. Pattern Recog., pp. 1526–1535. Cited by: §2.
  • [39] X. Yan, A. Rastogi, R. Villegas, K. Sunkavalli, E. Shechtman, S. Hadap, E. Yumer, and H. Lee (2018) Mt-vae: learning motion transformations to generate multimodal human dynamics. In Eur. Conf. Comput. Vis., pp. 265–281. Cited by: §2, §2.
  • [40] J. Zhang, P. Felsen, A. Kanazawa, and J. Malik (2019) Predicting 3D human dynamics from video. In Proceedings of the IEEE International Conference on Computer Vision, Vol. 2019-Octob, pp. 7113–7122. External Links: Document, 1908.04781, ISBN 9781728148038, ISSN 15505499, Link Cited by: §5.3.
  • [41] Y. Zhou, C. Barnes, L. Jingwan, Y. Jimei, and L. Hao (2019-06) On the continuity of rotation representations in neural networks. In IEEE Conf. Comput. Vis. Pattern Recog., Cited by: §3.2, §5.3.
  • [42] J. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman (2017) Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 465–476. External Links: Link Cited by: §2, §4.2.
  • [43] Y. Zhu, M. R. Min, A. Kadav, and H. P. Graf (2020) S3VAE: self-supervised sequential vae for representation disentanglement and data generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6538–6547. Cited by: §2.