Towards 3D Dance Motion Synthesis and Control

3D human dance motion is a cooperative and elegant social movement. Unlike regular simple locomotion, artistic dance motion is challenging to synthesize due to its irregularity, kinematic complexity, and diversity: the synthesized dance must be realistic, diverse, and controllable. In this paper, we propose a novel generative motion model based on temporal convolution and LSTM, TC-LSTM, to synthesize realistic and diverse dance motion. We introduce a unique control signal, the dance melody line, to heighten controllability. Our model, with its switchable control signals, supports a variety of applications: random dance synthesis, music-to-dance, user control, and more. Our experiments demonstrate that our model can synthesize artistic dance motion across various dance types and, compared with existing methods, achieves state-of-the-art results.


1. Introduction

Dance and concepts thereof are embroidered in our society, culture, and history (Boyd, 2004), whether it is freestyle (i.e., on-the-fly), a specific dance to a certain song (e.g., the macarena), a spiritually or culturally inspired dance-off, or even just solo acts of dancing while alone and replaying a melody from memory. Hence, dance moves have the power to allow individuals to express emotions, all the while having the persistence to inspire, spread knowledge, show culture, and promote beliefs. In the mobile age, dance performances are easily main-streamed, making it possible to broadcast or view them from just about anywhere at any given moment. Now, we aim to leverage our scientific research to directly enhance the melody of life we call dancing.

Dancing can be considered a form of art. It requires professional choreographers to create and design artistic movements to express emotions. For this, professional dancers are trained and equipped with a rich repertoire of dance steps - the more creative, the better. Different dancers perform quite differently, even to the same music or melody, and nonprofessionals would typically find it challenging to create a dance. Automatically creating dances is therefore a daunting task: dancing contains high kinematic complexity that spans long-term spatio-temporal structures (i.e., temporal 3D human dance motion), which makes realistic synthesis difficult. More importantly, dance motion is diverse, irregular, complex, and often designed for specific music or melody. It is also important to note that dancing is inherently a multi-modal problem (Lee et al., 2019), spanning multiple views (i.e., various dances for the same song). Lastly, different music or melodies should yield a whole variety of dance types. The challenges and specifications mentioned here demand an effective generative model to handle complex and diverse dance motions. Furthermore, such a powerful model should be adequate for various applications: freestyle (random synthesis (Li et al., 2017)), dancing with music (Music2Dance (Zhuang et al., 2020)), and supporting ordinary users in creating dances (user control).

Early on, research mainly adopted similarity-based retrieval methods to synthesize simple locomotion (Li et al., 2002; Min and Chai, 2012; Safonova and Hodgins, 2007). Then, others proposed methods to synthesize long-term dance motions in accordance with musical inputs (Fan et al., 2011; Ofli et al., 2011; Lee et al., 2013). However, these strategies lack flexibility and creativity, and are difficult to apply to irregular, complex dance motion. More recently, deep learning-based motion synthesis algorithms have shown greater potential (Fragkiadaki et al., 2015; Martinez et al., 2017; Li et al., 2017; Peng et al., 2017, 2018). Deep neural networks can model spatio-temporal structures of high kinematic complexity without taking up too much memory. RNNs (Fragkiadaki et al., 2015; Martinez et al., 2017) have been proposed in recent years to model human motion for motion prediction. However, these methods easily fall into temporal accumulation error (i.e., they get stuck in static poses). Li et al. proposed the auto-conditioned training strategy to train an RNN to synthesize complex dance motion (Li et al., 2017); the method achieves only random motion synthesis (long-term motion prediction), not controllable motion synthesis. Considering control signals, an LSTM-based method (Lee et al., 2018) can synthesize controllable simple locomotion to interact with the environment, but it is difficult for it to synthesize complex, controllable dance motion. Recently, several learning-based methods (Tang et al., 2018; Lee et al., 2019; Zhuang et al., 2020) have been used to synthesize controllable dance motion from music. Lee et al. designed a neural network based on a VAE and a GAN to synthesize 2D dance movement (Lee et al., 2019). The lstm-ae (Tang et al., 2018) was proposed to synthesize 3D dance motion from music features, but the synthesized dance motion is far from realistic and diverse. DanceNet, based on temporal convolution, has been applied to synthesize diverse dances for different dance types (Zhuang et al., 2020); however, the method cannot synthesize various dances for the same music. Unlike the direction and speed control in locomotion synthesis (Lee et al., 2018), the control signal adopted by that method is extracted directly from the music. Although music and dance are rhythmically consistent, music cannot completely determine the dance motion. This is a weak (i.e., not strong) control signal, so the controllability of their synthesized dance motion is lacking. Moreover, existing methods do not span various applications with the same model.

We first propose a dance control signal. Unlike in (Zhuang et al., 2020), we introduce a control signal, called the dance melody line, that is highly correlated to the dance since it is extracted directly from the dance motion. We sum the speeds of the salient joints frame-by-frame to capture the melody control signal (i.e., a 1D control signal). Because this low-dimensional signal is produced directly from motion, it is strongly correlated with the dance, yet highly coupled: different dance motions can correspond to the same dance melody line. This helps to synthesize different dance motions conditioned on the same control signal.

We also introduce a novel generative model with encoding and decoding stages. TCNs are robust to noisy inputs (Zhuang et al., 2020), so we adopt one to extract motion features and fuse control signals into controllable motion features during the encoding stage. To strengthen the long-term spatio-temporal dependence of the output frames, we adopt an LSTM as the decoder. Our model overcomes the LSTM's lack of robustness to noise while ensuring that the output frames leverage long-sequence dependency: the TCN produces the controllable motion features, which the LSTM then decodes to synthesize controllable dance motion. As a whole, we call our framework TC-LSTM. The output of TC-LSTM is designed as a probability density function (PDF; i.e., a Gaussian mixture model), which also makes our model more robust. With a careful training strategy (i.e., mix training), our model supports switching the melody control signal on and off, meaning that the same parameters serve both random and controllable motion synthesis.

Method                          Model extensibility  Random synthesis  Music2Dance  User control
ac-lstm (Li et al., 2017)       ✗                    ✓                 ✗            ✗
lstm-ae (Tang et al., 2018)     ✗                    ✗                 ✓            ✗
DanceNet (Zhuang et al., 2020)  ✓                    ✗                 ✓            ✗
Ours                            ✓                    ✓                 ✓            ✓

Table 1. Comparison against SOTA methods for 3D dance synthesis across applications. Model extensibility means the model can synthesize dances of different dance types. ✓ means the method supports the application; ✗ means it does not.

Applications. We ran experiments on the music-dance pair dataset (Zhuang et al., 2020). The results show that our model can generate realistic and diverse dance motions for different applications, listed as follows:

Random synthesis. We can switch off the melody control signal, and our model synthesizes long-term (i.e., arbitrary-length), diverse dance sequences (without temporal accumulation error).

Music2Dance. From analysis of the music-dance paired data (Zhuang et al., 2020), we found that the melody lines of music and dance are highly matched, so we directly use the music melody line as the melody control signal. The type control signal can be obtained by a classifier as in (Zhuang et al., 2020) or given by the user. Through the music melody line and dance type, we synthesize dance motion consistent with the melody and style of the music.

User control. The dance melody line is a 1D signal, which is easily given by ordinary users. Therefore, our approach allows ordinary users to design dance motions: we can synthesize controllable dance using a user-defined melody line (i.e., a drawing) as the melody control signal.

Research contributions. Along with the direct, tangible benefits, we propose the following contributions in research:

Controllability. To our knowledge, we are the first to propose a controllable dance synthesis framework.

Robustness to noisy inputs and long-term dependencies. Our encoder-decoder structure ensures robustness to noise and builds long-term spatio-temporal dependence across the output frames.

SOTA results with various applications. Our experiments show that our approach achieves SOTA results for different applications.

2. Background

We review the three research areas most related to the proposed method (i.e., motion synthesis and control, dance motion, and generative models).

Motion synthesis and control. Researchers tend to synthesize motions via data-driven methods, e.g., HMMs (Bowden, 2000; Brand and Hertzmann, 2000), spatial-temporal dynamic models (Chai and Hodgins, 2007; Wei et al., 2011; Lau et al., 2009; Xia et al., 2015), and low-dimensional statistical models (Chai and Hodgins, 2005; Grochow et al., 2004). In addition, other methods synthesized locomotion based on motion graphs (Li et al., 2002; Lee et al., 2002; Kovar et al., 2008; Min and Chai, 2012; Safonova and Hodgins, 2007). A common strategy among the aforementioned methods is similarity-based retrieval to synthesize simple locomotion, which depends completely on the available dataset and hence lacks flexibility and tends not to generalize well. Nowadays, deep neural network-based methods (Fragkiadaki et al., 2015; Holden et al., 2016) have gradually come into use for motion synthesis. For instance, RNN-based methods were proposed to predict short-term human motion, but are unable to synthesize long-term motion due to temporal accumulation error (Fragkiadaki et al., 2015; Jain et al., 2016; Martinez et al., 2017). Li et al. adopted the auto-conditioned training strategy to synthesize long-term motion, but lack the ability to control the generated motion (Li et al., 2017). Phase-functioned networks (Holden et al., 2017) and an LSTM-based method (Lee et al., 2018) were introduced to synthesize controllable locomotion. However, these methods are still limited to either simple random or simple controlled locomotion, and are thus unable to synthesize complex dance motion with complete control. Our approach overcomes these limitations: our method synthesizes realistic, complex, diverse, and controllable dance motion sequences.

Dance motion. As mentioned, earlier research tended to synthesize dance motions by adopting similarity-retrieval strategies (e.g., motion graphs (Li et al., 2002; Lee et al., 2002)). Fan et al. divided long-term dance motion into multiple short-term clips, which were then used to build a motion graph (Fan et al., 2011). Shiratori et al. retrieved dance segments whose rhythm was consistent with the music (Shiratori et al., 2006). However, these methods rely entirely on the dataset and lack true creativity and music-consistency. Recently, various types of neural networks have emerged as solutions for generating dance movements. For instance, VAE and GAN models were proposed to synthesize 2D dance motion from music (Lee et al., 2019). Tang et al. built an lstm-ae to generate 3D dance motions; however, the generated motions are unrealistic (Tang et al., 2018). Li et al. proposed the auto-conditioned LSTM to synthesize 3D dance motion; however, this work lacked motion control (just random synthesis) (Li et al., 2017). A model based on temporal convolution was proposed to generate 3D, controllable dance motions, but its controllability is limited by its inability to synthesize multimodal dances given the same control signals (Zhuang et al., 2020). Our model overcomes these limitations in controllability, synthesizing realistic, diverse, and controllable multimodal dances. Furthermore, we use just a single model across a wide range of applications: random synthesis, Music2Dance, and user control (Fig 1).

Generative model. In motion synthesis, the common autoregressive model is an LSTM (Fragkiadaki et al., 2015; Jain et al., 2016; Martinez et al., 2017; Li et al., 2017). Like most others, these models can only generate random outputs and lack the ability to control the synthesized motion. Lee et al. designed the control signal at the model input, but lacked robustness to noise (Lee et al., 2018). Zhuang et al. proposed an autoregressive model based on temporal convolution (Zhuang et al., 2020); although the model is robust to input noise, it is unable to capture temporal dependencies across output frames. We carefully consider the pros and cons of the temporal convolution and LSTM-based methods, leveraging the strengths of each in our design. During the encoding phase, we train our model to be insensitive to input noise by encoding features with temporal convolution. Thereafter, we employ an LSTM decoder, as it enhances the temporal correlation of the output motion, so that our method synthesizes realistic, complex, diverse, and controllable dance motions.

3. Overview

The proposed framework is shown in Figure 1. We treat 3D dance synthesis as an autoregressive process conditioned on control signals. Specifically, to synthesize the current frame, we take the previous dance frames and the control signals (dance melody line and dance type) as inputs. We introduce dance motion and control signal processing in Section 4. The model consists of two parts: an encoder based on temporal convolution and a decoder based on LSTMs, elaborated in Section 5. After decoding, the model outputs the PDF of the current dance frame, from which we sample to obtain the frame. Our model realizes different applications with a single set of parameters: random dance synthesis, Music2Dance, and user control, introduced in Section 6.

4. Data Processing

Zhuang et al. introduced a high-quality music-dance pair dataset for synthesizing dance from music (Zhuang et al., 2020). This dataset consists of two types of dances, modern dance (26.15 minutes; 94,155 frames at 60 FPS) and Korean dance (31.72 minutes; 114,192 frames at 60 FPS), which we used to train the proposed model. The aim of this work is to generate controllable dance motions with a control signal made up of two parts: the dance melody line (characterizing the dance rhythm; a local condition) and the dance type (characterizing the dance style; a global condition). The dance type can be represented as a one-hot vector, similar to (Zhuang et al., 2020). We next describe the dance melody line (Section 4.1) and then the dance motion representation (Section 4.2).

Figure 2. Depiction of the melody line from a modern dance. We propose quantifying melody lines as 1D signals.

4.1. Dance melody line

Professional choreographers taught us that human dance moves are more than just random movements (Hewitt, 2005). There is an internal melody to dance, founded on higher-level information that reflects the rhythm and speed of a dance, mirroring the main melody line in music theory (Mason, 2012). For this, we introduce a kinematic melody line extractor to encode the speed information of the motion. Theoretically, we could use either the angular velocities or the position velocities to encode the speed of a dance; in practice, we chose joint translation velocities for the purpose of motion control, similar to (Kim et al., 2003). Inspired by (Fan et al., 2011), we extract the speed of the motion for just a few key joints, as opposed to all of them. Specifically, we focus on the left and right shoulders, elbows, hands, knees, feet, and the head. From these joints, we can express the salient kinematic movement, along with the importance of the positions in motion features (Lee et al., 2002). The speed of motion is determined by the change in position between neighboring poses. Specifically, the speed of the motion $v_t$ at frame $t$ is the sum of the speeds of the key joints $j \in \mathcal{J}$. Mathematically speaking,

$v_t = \sum_{j \in \mathcal{J}} \big\| p_t^j - p_{t-1}^j \big\|_2 \qquad (1)$

where $p_t^j$ is the position of joint $j$ at frame $t$. To ensure smoothness, we apply a Gaussian filter to the motion speed to obtain the motion melody line $m_t$. Thus, $m_t$ (i.e., the smoothed motion speed) is a strong signal (Fig. 2). Meanwhile, the signal is highly coupled due to the summing of speeds: different motions may have the same melody line.

Since dance focuses on melody changes, we use the melody change trend as the control signal instead of the melody line directly. Specifically, we extract the melody line relative to the value at the current frame over a one-second interval and use it as the melody control signal, represented by sparse sampling in the temporal domain. Hence, starting from frame $t$ of the melody line, a one-second sequence of future frames is extracted (i.e., 60 frames from our 60 FPS dance motion). Then, we down-sample in the temporal domain to twelve 1D points. Finally, we subtract the speed at frame $t$ to produce the change in speed and, thus, the melody control signal,

$c_t^k = m_{t + k\Delta} - m_t, \quad k = 1, \dots, 12 \qquad (2)$

where $\Delta$ is the sampling interval and $k$ is the sampling index.
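As a concrete illustration, the melody-line extraction and control-signal sampling described above can be sketched in numpy. The key-joint indices, filter width, and function names below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical indices of the salient joints (shoulders, elbows, hands,
# knees, feet, head); the real skeleton indexing is dataset-specific.
KEY_JOINTS = list(range(13))

def melody_line(positions, key_joints, sigma=2.0):
    """positions: (T, J, 3) joint positions.
    Returns the Gaussian-smoothed sum of key-joint speeds m_t, shape (T-1,)."""
    vel = np.linalg.norm(np.diff(positions, axis=0), axis=-1)  # (T-1, J)
    speed = vel[:, key_joints].sum(axis=1)                     # v_t, Eq. (1)
    return gaussian_filter1d(speed, sigma)                     # m_t

def melody_control(m, t, horizon=60, points=12):
    """Relative change of the melody line over one second (Eq. (2)):
    12 future samples minus the value at the current frame t."""
    step = horizon // points                 # sampling interval Delta
    idx = t + step * np.arange(1, points + 1)
    return m[idx] - m[t]                     # c_t^k, k = 1..12
```

Because speeds are summed over joints, many different motions collapse onto the same 1D melody line, which is exactly the many-to-one coupling the section describes.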

4.2. Motion representation

Human motion is modeled as an articulated figure with rigid links connected by ball-and-socket joints. Each frame is represented by the root translation, the root rotation, and the other joint rotations with respect to their parents (with $j$ the joint index). However, such a representation is a relative feature carrying only local information about the parents, and the translation and rotation of the root joint are relative to world coordinates. Ultimately, this enlarges the motion feature space, which increases the modelling complexity. Thus, we adopt the relative translation and rotation of the root joint and add 3D joint positions, angular velocities, and positional velocities to the representation, allowing us to better model human motion. In our method, the motion feature at frame $t$ comprises the joint rotations $r_t$ in the quaternion exponential map (Grassia, 1998), the angular velocities $\dot{r}_t$, the 3D joint positions $p_t$, the joint linear velocities $\dot{p}_t$, and the foot contact information $f_t$. For the rotation of the root joint, we use the relative rotation between the current and previous frames (i.e., the rotation about the Y-axis), and the X and Z translations of the root joint are defined in the local coordinates of the previous frame, as in (Holden et al., 2016). In summary,

$x_t = \big[\, r_t,\ \dot{r}_t,\ p_t,\ \dot{p}_t,\ f_t \,\big] \qquad (3)$
$\dot{r}_t = r_t - r_{t-1} \qquad (4)$
$\dot{p}_t = p_t - p_{t-1} \qquad (5)$

We extract motion features from two aspects (i.e., rotation and position) to maximize the amount of motion information, and we add the angular velocity and linear velocity to represent the motion feature more fully. In addition, foot contact information is added to reduce foot sliding in the generated frames. As in (Holden et al., 2017; Zhuang et al., 2020), the ground-truth foot contact labels are detected from the height and speed of the feet in each frame.
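A minimal sketch of assembling the per-frame motion feature described above, assuming a Y-up coordinate system and hypothetical contact thresholds (the paper's exact detection thresholds are not given):

```python
import numpy as np

def foot_contacts(foot_pos, h_thresh=0.05, v_thresh=0.01):
    """Label foot contact per frame from height and speed.
    foot_pos: (T, F, 3). Thresholds and the Y-up axis are assumptions."""
    speed = np.linalg.norm(np.diff(foot_pos, axis=0), axis=-1)  # (T-1, F)
    speed = np.concatenate([speed[:1], speed])                  # pad to T
    height = foot_pos[..., 1]                                   # Y component
    return ((height < h_thresh) & (speed < v_thresh)).astype(np.float32)

def frame_feature(rot, pos, contact):
    """Concatenate rotations, angular velocities, positions, linear
    velocities (finite differences), and foot contact, per Eq. (3)-(5).
    rot, pos: (T, J, 3); contact: (T, F). Returns (T-1, J*12 + F)."""
    T = len(rot)
    ang_vel = rot[1:] - rot[:-1]            # Eq. (4)
    lin_vel = pos[1:] - pos[:-1]            # Eq. (5)
    return np.concatenate([rot[1:].reshape(T - 1, -1),
                           ang_vel.reshape(T - 1, -1),
                           pos[1:].reshape(T - 1, -1),
                           lin_vel.reshape(T - 1, -1),
                           contact[1:]], axis=-1)
```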

5. Generative Model

Next, we introduce the structure of the proposed model, and then we discuss the training details.

5.1. Encoder-decoder structure

Our end-to-end framework is shown in Figure 1: the proposed TC-LSTM models the PDF of the predicted motion frame conditioned on control signals as

$p\big(x_t \mid x_{t-n:t-1}, c\big) = F\big(x_{t-n:t-1}, c\big) \qquad (6)$
$F = D \circ E \qquad (7)$

where $x_{t-n:t-1}$ is the motion of the previous $n$ frames and $F$ is our model, made up of the encoder $E$ and the decoder $D$.

Encoder. Inspired by (Zhuang et al., 2020), we adopt temporal dilated convolution to extract input features for improved robustness to noise. The controllable motion feature $z_t$ is extracted during encoding via

$z_t = E\big(x_{t-n:t-1}, c\big) \qquad (8)$

Due to the high complexity of motion data, we first encode the input with two stacked Conv1D+ReLU modules. Then, a residual control module fuses the encoded motion features and control signals, similar to (Zhuang et al., 2020). We stack 10 residual control modules and sum their outputs to produce the controllable motion feature $z_t$. Furthermore, the dilated convolutions extract temporally fused information from the motion sequence by increasing the receptive field. The control signals enter through a 1D convolution layer with a kernel size of 1, and the fused temporal motion feature is combined with the control features by summation. Because this additive coupling is loose, we can swap the control signals between training and inference. In our experiments, the input channels are 128D, while the controllable motion feature is 512D.
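The summation-based fusion inside a residual control module can be sketched in numpy as follows; the weight shapes, tanh nonlinearity, and (residual, skip) output pair are assumptions in the spirit of (Zhuang et al., 2020), not the exact architecture:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """x: (T, C_in), w: (K, C_in, C_out). Left-pads so frame t only
    sees frames <= t (causal), with dilated taps."""
    K = w.shape[0]
    pad = np.zeros(((K - 1) * dilation, x.shape[1]))
    xp = np.concatenate([pad, x], axis=0)
    out = np.zeros((x.shape[0], w.shape[2]))
    for k in range(K):
        out += xp[k * dilation : k * dilation + x.shape[0]] @ w[k]
    return out

def residual_control_block(x, c, w_conv, w_ctrl, dilation):
    """Fuse temporal motion features with control features by summation:
    dilated conv on motion, 1x1 projection of control signal c, then add."""
    h = np.tanh(causal_dilated_conv(x, w_conv, dilation))
    h = h + c @ w_ctrl          # kernel-size-1 conv == per-frame matmul
    return x + h, h             # residual output, skip contribution
```

The skip contributions of the 10 stacked blocks would then be summed to form $z_t$.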

Figure 3. The decoder of our model.

Decoder. To improve the temporal correlation of the output motion, we use two LSTM layers and a fully connected layer to decode $z_t$ and predict the PDF of the current frame (Fig. 3). Mathematically speaking,

$(h_t^1, c_t^1) = \mathrm{LSTM}_1\big(z_t, h_{t-1}^1, c_{t-1}^1\big), \quad (h_t^2, c_t^2) = \mathrm{LSTM}_2\big(h_t^1, h_{t-1}^2, c_{t-1}^2\big), \quad \hat{x}_t = \mathrm{FC}\big(h_t^2\big) \qquad (9)$

where $h_t^1$ and $c_t^1$ represent the hidden state and cell memory of the first LSTM layer, respectively, and $h_t^2$ and $c_t^2$ those of the second layer (Fig. 3). The hidden states and cell memories of the LSTMs ensure temporal correlation between output frames, so that the output motion is smooth and realistic.
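A numpy sketch of one decoding step with two stacked LSTM layers followed by a fully connected head; the gate packing, parameter-dict layout, and dimensions are illustrative assumptions:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gates packed as [input, forget, cell, output].
    x: (D,), h, c: (H,), W: (D, 4H), U: (H, 4H), b: (4H,)."""
    z = x @ W + h @ U + b
    H = h.shape[-1]
    i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    return sig(o) * np.tanh(c_new), c_new

def decode_frame(z, state1, state2, params):
    """Two stacked LSTM layers, then a linear head over the controllable
    feature z_t, matching the shape of Eq. (9)."""
    h1, c1 = lstm_step(z, *state1, *params["lstm1"])
    h2, c2 = lstm_step(h1, *state2, *params["lstm2"])
    out = h2 @ params["fc_w"] + params["fc_b"]
    return out, (h1, c1), (h2, c2)
```

The carried-over states (h1, c1) and (h2, c2) are what propagate temporal context from frame to frame.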

5.2. Training

Training loss. The output of TC-LSTM is the PDF of frame $t$, which we model as a GMM, with a loss defined as the negative log-likelihood. Specifically,

$p\big(x_t \mid x_{t-n:t-1}, c\big) = \sum_{i=1}^{M} \pi_i \, \mathcal{N}\big(x_t; \mu_i, \Sigma_i\big) \qquad (10)$
$\mathcal{L}_{gmm} = -\log p\big(x_t^{gt} \mid x_{t-n:t-1}, c\big) \qquad (11)$

where $\pi_i$, $\mu_i$, and $\Sigma_i$ are the mixture weight, mean vector, and co-variance matrix of the $i$-th component of the output, respectively. Note that $x_t^{gt}$ is the ground-truth motion frame. To ensure temporal smoothness of the output motion, the smoothness loss is optimized over just the mean vectors via $\mathcal{L}_{smooth} = \| \mu_t - \mu_{t-1} \|$. Note that the binary foot contact in $x_t$ is omitted from $\mathcal{L}_{gmm}$. Instead, we use the binary cross entropy (BCE) loss to compute the foot contact loss as $\mathcal{L}_{foot} = \mathrm{BCE}(f_t^{gt}, \hat{f}_t)$, with $f_t^{gt}$ and $\hat{f}_t$ as the ground-truth and predicted foot contact, respectively.

In the end, our training loss can be described as a sum of losses:

$\mathcal{L} = \mathcal{L}_{gmm} + \lambda_1 \mathcal{L}_{smooth} + \lambda_2 \mathcal{L}_{foot} \qquad (12)$

where $\lambda_1$ and $\lambda_2$ are balance parameters.
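The loss terms above can be sketched for a diagonal-covariance mixture; the component count, covariance form, and default balance weights here are placeholders, not the paper's settings:

```python
import numpy as np

def gmm_nll(pi, mu, sigma, x):
    """Negative log-likelihood of x under a diagonal-covariance GMM.
    pi: (M,), mu, sigma: (M, D), x: (D,). Uses log-sum-exp for stability."""
    log_comp = (np.log(pi)
                - 0.5 * np.sum(np.log(2 * np.pi * sigma**2), axis=-1)
                - 0.5 * np.sum(((x - mu) / sigma)**2, axis=-1))
    m = log_comp.max()
    return -(m + np.log(np.exp(log_comp - m).sum()))

def total_loss(pi, mu, sigma, x, mu_prev, contact_pred, contact_gt,
               l1=1.0, l2=1.0):
    """L = L_gmm + l1 * L_smooth + l2 * L_foot, per Eq. (12).
    Balance weights l1, l2 are placeholders (paper values elided)."""
    smooth = np.abs(mu - mu_prev).mean()          # over mean vectors only
    p = np.clip(contact_pred, 1e-7, 1 - 1e-7)
    bce = -(contact_gt * np.log(p)
            + (1 - contact_gt) * np.log(1 - p)).mean()
    return gmm_nll(pi, mu, sigma, x) + l1 * smooth + l2 * bce
```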

Then, at inference, the generated motion frame can be obtained by sampling from the predicted pdf.

Implementation details. To achieve dance synthesis with and without the dance melody line control signal, we adopt a mix training strategy: during training, we pass the melody control signal with a probability of 0.5. To improve robustness, we apply data augmentation: (1) mirror transformations for additional dance motion; (2) Gaussian noise added to the input and ground truth, so the model learns to handle temporal accumulation error; and (3) dropout (i.e., 0.4) at the input to counter over-fitting. We initialize our model with Xavier normal initialization (Glorot and Bengio, 2010) and optimize via RMSprop (Tieleman and Hinton, 2012). Training runs for 500 epochs, starting from an initial learning rate that drops by a factor of 10 at epoch 300. The batch size is 128, with each sample a motion sequence of 600 continuous frames. Our system is implemented in PyTorch 1.2 on a PC with an Intel i7 CPU, 32 GB RAM, and a GeForce GTX 1080Ti.
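The mix training strategy amounts to a stochastic toggle on the control input; the noise level in the augmentation helper below is an assumed placeholder, since the paper's value is elided:

```python
import numpy as np

_rng = np.random.default_rng(0)

def mix_training_control(melody_signal, p_keep=0.5, rng=_rng):
    """With probability p_keep, pass the melody control signal through;
    otherwise zero it out, so one set of parameters learns both random
    and controlled synthesis."""
    if rng.random() < p_keep:
        return melody_signal
    return np.zeros_like(melody_signal)

def augment(frame, noise_std=0.01, rng=_rng):
    """Add Gaussian noise to a motion frame so the model learns to absorb
    temporal accumulation error (noise_std is an assumption)."""
    return frame + rng.normal(0.0, noise_std, size=frame.shape)
```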

6. Experiment

Figure 4. User studies for the different applications: (a) Random synthesis: 10 users scored the synthesized dances of ac-lstm, our method (no melody control signal), and the real dances (the dances in the dataset); (b) Music2Dance (realism): 10 users scored the realism of the synthesized dances of DanceNet, our method (with music melody line), and the real dances; (c) Music2Dance (consistency): users scored the music-consistency of the synthesized dances; (d) User control: users scored the realism and melody-consistency of dances synthesized by our method (user-given melody line).

The proposed melody control signal can be toggled on and off for different applications (i.e., random synthesis, Music2Dance, and user control). As far as we know, this is the first dance motion synthesizer to serve a wide range of applications with the same model (the same parameters), and the first to support user-controlled synthesis of specific dance motions.

In this section, we describe each application and compare with SOTA methods (Table 1). The SOTA random synthesis model, ac-lstm (Li et al., 2017), adopts a unique strategy (i.e., auto-conditioning) to train the LSTM; however, it lacks controllability for dance motion. Tang et al. proposed an LSTM autoencoder (lstm-ae) to synthesize dance motion from music (Tang et al., 2018), but its synthesized dances are unrealistic and out of sync with the music, and it cannot be used in other applications. DanceNet was proposed to synthesize dance from music (Zhuang et al., 2020); however, it cannot synthesize a variety of dance motions from the same music. Our model synthesizes realistic and diverse dance motions of different dance types, and the dance motions synthesized from the same music are varied. Animations for all applications are shown in the video demo in the supplemental material.

                               Modern Dance            Korean Dance
Method                         FID   Div-I  Div-II     FID   Div-I  Div-II
Real Dances                    6.5   55.4   --         5.6   42.5   --
ac-lstm (Li et al., 2017)      23.4  41.1   7.8        22.5  28.9   7.5
Ours (w/o LSTM decoder)        15.6  48.1   37.9       11.7  36.2   24.8
Ours                           10.6  50.9   40.1       7.4   38.5   26.3

Table 2. Comparison of realism (FID; lower is better) and diversity (Div-I: synthesized conditioned on different initial frames; Div-II: synthesized conditioned on the same initial frames; higher is better).
Figure 5. Example of a random synthesis result.

6.1. Random synthesis

Given the dance type and initial dance motion frames (30 frames), our model can generate realistic and diverse dance sequences in the corresponding style. With the same initial input frames, our model can generate different dance motion sequences, as shown in Figure 5, since each predicted frame is sampled from the PDF of the model output, which effectively increases motion diversity. The sampled motion frame is then fed back to the input to generate the follow-up frame. To evaluate our method, we compared against the SOTA model ac-lstm (Li et al., 2017). We randomly synthesize 15 dance motion sequences for each dance type, and every three synthesized sequences share the same initial input frames. We evaluate the randomly synthesized dance motion on realism and diversity. Realism is evaluated by the Fréchet Inception Distance (FID) (Heusel et al., 2017), similar to (Yan et al., 2019; Lee et al., 2019). Since FID needs an action classifier to extract dance features, we adopt 3 temporal convolution layers and 1 Bi-LSTM layer as the feature extractor. Because our method can synthesize diverse sequences from different or identical initial frames, we evaluate diversity as the average feature distance among different sequences, using the dance features extracted by the action classifier. Diversity-I measures the diversity of dance sequences synthesized from different initial frames; Diversity-II measures the diversity from the same initial frames. The results are shown in Table 2. For better comparison, we conduct a user study (10 users) to score the realism and diversity of the dances, as shown in Figure 4a. Our model synthesizes more realistic dance motion sequences, close to real dance sequences. Notably, the dances generated by our model are diverse, while ac-lstm (Li et al., 2017) cannot synthesize diverse dances at all from the same initial frames; it can only synthesize the same dance given the same initial frames. One explanation is that their training strategy avoids temporal accumulation error, but the model completely loses diversity (it simply overfits to the training data). Our method achieves random synthesis of diverse motion sequences because our model has complex and robust modelling capabilities and outputs a probability density (from which each predicted frame is sampled), which increases dance diversity.
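For reference, the FID and average-feature-distance diversity metrics can be computed as follows (standard formulas applied to extracted dance features, not code from the paper):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_real, feat_gen):
    """Fréchet distance between Gaussian fits of real and generated
    dance-feature sets, each of shape (N, D)."""
    mu1, mu2 = feat_real.mean(0), feat_gen.mean(0)
    s1 = np.cov(feat_real, rowvar=False)
    s2 = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):       # discard numerical imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2)**2) + np.trace(s1 + s2 - 2 * covmean))

def diversity(features):
    """Average pairwise distance among per-sequence feature vectors."""
    n = len(features)
    d = [np.linalg.norm(features[i] - features[j])
         for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(d))
```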

6.2. Music2Dance

Synthesizing music-consistent dance is an interesting and challenging task: the synthesized dance must be consistent with the music's rhythm, style, and melody. However, music and dance are only weakly related; music does not determine the specific dance posture, that is, whether the dance moves are leg lifts, jumps, or spins. Therefore, establishing a correlation between music and dance is very important. Tang et al. (Tang et al., 2018) synthesized dance directly from music without explicitly establishing the relationship between music and dance, so the synthesized dance is not realistic. Zhuang et al. (Zhuang et al., 2020) added music features to the autoregressive motion synthesis process, but also did not establish the relationship between music and dance.

How to determine the relationship between music and dance is the core difficulty of the Music2Dance task. From professional choreographers, we know this relationship is reflected in melody and rhythm. Therefore, we construct the relationship between music and dance through their melody lines. In Section 4.1, we proposed the dance melody line to express the melody and rhythm of dance. To extract the music melody line, we introduce a simple and effective method: extract the onset strength with librosa (McFee et al., 2015) or madmom (Böck et al., 2016), then smooth it with a Gaussian filter.
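A self-contained sketch of the music melody line extraction; the spectral-flux onset strength below is a numpy stand-in for librosa's onset-strength routine, with assumed frame parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def onset_strength(audio, n_fft=1024, hop=512):
    """Half-wave-rectified spectral flux as a simple onset envelope
    (a stand-in for librosa.onset.onset_strength)."""
    frames = [audio[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(audio) - n_fft, hop)]
    mag = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    flux = np.maximum(np.diff(mag, axis=0), 0.0).sum(axis=1)
    return np.concatenate([[0.0], flux])     # pad to one value per frame

def music_melody_line(audio, sigma=2.0):
    """Onset strength smoothed by a Gaussian filter: the music melody line."""
    return gaussian_filter1d(onset_strength(audio), sigma)
```

The resulting 1D envelope can then be differenced against the current frame exactly as in Eq. (2) to form the melody control signal from music.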

Figure 6. The melody lines of dance and music in music-dance pair dataset.

We select a segment from the music-dance pair data, extract the melody lines of the music and the dance, and compare them, as shown in Figure 6. Although the melody values of each frame are not necessarily equal, the change trends and peaks of the two melody lines basically coincide, indicating that the melody line reflects the rhythmic consistency between music and dance. Since the melody control signal of the model (Section 4.1) is the change trend of the melody line relative to the current frame, we can directly adopt the change trend of the music melody line as the melody control signal.


Method                         | Modern Dance                         | Korean Dance
                               | FID   Div-I Div-II Div-III Rhythm    | FID   Div-I Div-II Div-III Rhythm
Real Dances                    | 6.5   55.4  –      –       57.9%     | 5.6   42.5  –      –       68.3%
lstm-ae (Tang et al., 2018)    | 81.3  12.4  9.4    8.9     13.6%     | 75.6  10.2  7.9    7.6     15.1%
DanceNet (Zhuang et al., 2020) | 15.2  49.3  40.1   7.6     56.7%     | 10.4  36.3  30.2   6.5     64.3%
Ours (w/o LSTM decoder)        | 13.8  50.3  50.1   42.5    55.4%     | 10.1  35.3  32.7   24.9    67.3%
Ours                           | 11.2  50.8  48.9   43.4    56.2%     | 7.8   36.5  31.9   26.4    66.8%

Table 3. Comparison of realism (FID, lower is better), diversity (Diversity-I: synthesized conditioned on different initial frames and different melody lines; Diversity-II: same initial frames, different melody lines; Diversity-III (multi-modal): same initial frames and same melody line), and rhythm consistency (rhythm hit rate, higher is better).
Figure 7. Music2Dance results. Three dance sequences synthesized conditioned on the same initial frames and the same music (melody line).

To demonstrate the effectiveness of our approach, we compare against two state-of-the-art methods, lstm-ae (Tang et al., 2018) and DanceNet (Zhuang et al., 2020). We use the FID to evaluate dance realism and the average feature distance to evaluate dance diversity. For this application, diversity is evaluated from three views: (1) Diversity-I, the diversity of dance sequences synthesized conditioned on different initial frames and different melody lines; (2) Diversity-II, conditioned on the same initial frames and different melody lines; and (3) Diversity-III, conditioned on the same initial frames and the same melody line. Diversity-III reflects the multi-modal power of Music2Dance. In addition, we use rhythm consistency (i.e., the rhythm hit rate) to evaluate our method, as in (Lee et al., 2019; Zhuang et al., 2020). We randomly select 5 initial sequences and synthesize 30 dance motion sequences per dance type with each of the two baselines and our approach; for each initial sequence, three sequences are synthesized conditioned on the same melody line and three on different melody lines. The quantitative results are shown in Table 3. We also conduct a user study to evaluate realism and music consistency (Fig. 4, b & c).

Our results significantly outperform lstm-ae; we believe that directly mapping music to dance movement is unreasonable due to the weak correlation between the two. DanceNet (Zhuang et al., 2020) uses music features as conditions to synthesize dance, but inputs them directly without explicitly analyzing the correlation between music and dance. Consequently, the model takes a long time to train (i.e., >1,000 epochs), yet the consistency between the synthesized dance and the music remains low. Moreover, since the output of DanceNet is a probability distribution, training for too long causes the model to collapse (i.e., it lacks diversity), especially for dance sequences synthesized conditioned on the same initial frames. In contrast, we explicitly model the relationship between music and dance through the melody lines: during training, the dance melody line is used as the control signal; at inference, the music melody line takes its place. This strategy ensures the controllable effect (i.e., music consistency) of the synthesized dance. More importantly, owing to the tight coupling of the melody lines and the low training difficulty (i.e., about 500 epochs), our method prevents the model from collapsing and synthesizes varied dances.
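For reference, the two quantitative metrics can be sketched as below. The FID follows the standard Gaussian-fit formula of Heusel et al. (2017) and the diversity metric is a plain average pairwise feature distance; the motion-feature extractor itself is omitted, and the feature shapes are assumptions (arrays of shape [num_sequences, feature_dim]).

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def fid(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two feature sets:
    ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2 (Ca^{1/2} Cb Ca^{1/2})^{1/2})."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    ca = np.cov(feats_a, rowvar=False)
    cb = np.cov(feats_b, rowvar=False)
    s = _sqrtm_psd(ca)
    cross = _sqrtm_psd(s @ cb @ s)  # symmetric form of (Ca Cb)^{1/2}
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(ca + cb - 2.0 * cross))

def diversity(feats):
    """Average pairwise feature distance within one set of synthesized dances."""
    d, n = 0.0, len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            d += np.linalg.norm(feats[i] - feats[j])
    return d / (n * (n - 1) / 2)

# Toy demo: identical distributions give near-zero FID, shifted ones do not.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))
fake = rng.normal(loc=0.5, size=(200, 8))
```

Using the symmetric product Ca^{1/2} Cb Ca^{1/2} keeps the matrix-root computation on symmetric PSD matrices, avoiding the complex values a general matrix square root can produce.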

In addition to music in WAV format, our method also works with audio in other formats, e.g., MIDI, an electronic music format. Since MIDI already encodes note changes as a temporal sequence, the melody line is even easier to obtain (i.e., directly from the changes of the music notes).

Figure 8. User controls - the melody line is drawn by users.

6.3. User control

Given the dance type, our model synthesizes realistic, diverse dance motions conditioned on the melody line. The melody line is a simple 1D signal (Figs. 2, 6, and 7), which allows an ordinary user, rather than a professional choreographer, to create dances: the melody line can be described in a variety of ways, e.g., by drawing (Fig. 8). We built an end-to-end system with two steps: (1) the user draws the melody line via mouse input, and (2) the synthesizer generates a dance according to that melody line. Since the lines drawn by the user are evenly sampled into melody lines, our model synthesizes melody-consistent dance motions.
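The even-sampling step (1) might look like the following sketch. The point format, normalization, and function name are illustrative assumptions rather than our system's exact implementation.

```python
import numpy as np

def resample_drawn_line(points, n_frames):
    """Evenly resample user mouse points (x, y) into a per-frame melody line.
    x is the screen position treated as time, y is the melody strength."""
    pts = np.asarray(points, dtype=float)
    order = np.argsort(pts[:, 0])            # mouse events may arrive unordered
    x, y = pts[order, 0], pts[order, 1]
    xs = np.linspace(x[0], x[-1], n_frames)  # evenly spaced sample positions
    line = np.interp(xs, x, y)
    # Normalize to [0, 1] so strokes drawn at different scales are comparable.
    span = line.max() - line.min()
    return (line - line.min()) / span if span > 0 else np.zeros(n_frames)

# A short stroke: rise, peak, fall (one point recorded out of order).
stroke = [(0, 10), (40, 30), (100, 20), (70, 50)]
melody = resample_drawn_line(stroke, n_frames=8)
```

Sorting by x before interpolation matters because np.interp requires monotonically increasing sample positions, and raw mouse trajectories do not guarantee this.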

To our knowledge, this is the first model with such controls for synthesizing dances, so we evaluate it with a user study. We asked 10 users to draw 3 melody lines each; they then scored each synthesized dance sequence for realism and melody consistency (Fig. 4 d). Our method synthesizes realistic dances while ensuring melody consistency, especially for modern dance.

6.4. Discussion and future work

Ablation study. Our TC-LSTM model is composed of two parts: an encoder (temporal convolution) and a decoder (LSTM). To verify the contribution of the decoder, we conducted a comparative experiment: we model the dance with the temporal convolution alone (without the LSTM) under the same training strategy, and perform quantitative evaluations on two applications, random synthesis and Music2Dance (Tables 2 and 3). When the LSTMs are omitted from the decoder, the synthesized dances worsen, especially in realism (i.e., FID), which shows that building temporal dependencies via LSTMs in the decoder improves modeling capability.

Result discussion. Our method enables dance synthesis applications beyond the three described above, e.g., synthesizing dance from musical notation (by extracting a melody line from the notation). Across these applications, we observed a noteworthy phenomenon: the melody consistency (i.e., controllability) of modern dance is superior to that of Korean dance, and the diversity of Korean dance is inferior to that of modern dance. The first reason lies in the data: the dance steps of Korean dance in our dataset are unevenly distributed, which makes the data difficult to model. The second reason is that the rhythm and melody of Korean dance are very fast, which leads to poor temporal smoothness and high dynamic complexity.

Future work. To the best of our knowledge, we are the first to propose a generative model with controllable dance synthesis via simple and effective 1D control signals. Several topics remain worth exploring: how to quantitatively evaluate the effectiveness of the controls, the mediocre modeling ability for fast-rhythm Korean dance (mentioned above), and foot sliding (which is difficult to solve with IK for complex dance motion). These are subjects of future work.

7. Conclusion

We introduced TC-LSTM, a novel generative model based on temporal convolution and LSTM that synthesizes realistic, diverse dances (i.e., motion sequences). Our model handles different dance types across various applications: random synthesis, music2dance, and user control. Quantitative results and user studies establish the effectiveness of our method: it synthesizes more realistic and diverse dance motion sequences than prior work, achieving state-of-the-art results.

References

  • S. Böck, F. Korzeniowski, J. Schlüter, F. Krebs, and G. Widmer (2016) Madmom: a new Python audio and music signal processing library. In Proceedings of the 24th ACM International Conference on Multimedia, pp. 1174–1178.
  • R. Bowden (2000) Learning statistical models of human motion. In IEEE Workshop on Human Modeling, Analysis and Synthesis, CVPR, Vol. 2000.
  • J. Boyd (2004) Dance, culture, and popular film. Feminist Media Studies 4 (1), pp. 67–83.
  • M. Brand and A. Hertzmann (2000) Style machines. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 183–192.
  • J. Chai and J. K. Hodgins (2005) Performance animation from low-dimensional control signals. ACM Transactions on Graphics (TOG) 24 (3), pp. 686–696.
  • J. Chai and J. K. Hodgins (2007) Constraint-based motion optimization using a statistical dynamic model. ACM Transactions on Graphics (TOG) 26 (3), pp. 8.
  • R. Fan, S. Xu, and W. Geng (2011) Example-based automatic music-driven conventional dance motion synthesis. IEEE Transactions on Visualization and Computer Graphics 18 (3), pp. 501–515.
  • K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik (2015) Recurrent network models for human dynamics. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4346–4354.
  • X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256.
  • F. S. Grassia (1998) Practical parameterization of rotations using the exponential map. Journal of Graphics Tools 3 (3), pp. 29–48.
  • K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović (2004) Style-based inverse kinematics. In ACM SIGGRAPH 2004 Papers, pp. 522–531.
  • M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637.
  • A. Hewitt (2005) Social choreography: ideology as performance in dance and everyday movement. Duke University Press.
  • D. Holden, T. Komura, and J. Saito (2017) Phase-functioned neural networks for character control. ACM Transactions on Graphics (TOG) 36 (4), pp. 1–13.
  • D. Holden, J. Saito, and T. Komura (2016) A deep learning framework for character motion synthesis and editing. ACM Transactions on Graphics (TOG) 35 (4), pp. 138.
  • A. Jain, A. R. Zamir, S. Savarese, and A. Saxena (2016) Structural-RNN: deep learning on spatio-temporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5308–5317.
  • T. Kim, S. I. Park, and S. Y. Shin (2003) Rhythmic-motion synthesis based on motion-beat analysis. ACM Transactions on Graphics (TOG) 22 (3), pp. 392–401.
  • L. Kovar, M. Gleicher, and F. Pighin (2008) Motion graphs. In ACM SIGGRAPH 2008 Classes, pp. 51.
  • M. Lau, Z. Bar-Joseph, and J. Kuffner (2009) Modeling spatial and temporal variation in motion data. In ACM Transactions on Graphics (TOG), Vol. 28, pp. 171.
  • H. Lee, X. Yang, M. Liu, T. Wang, Y. Lu, M. Yang, and J. Kautz (2019) Dancing to music. In Advances in Neural Information Processing Systems, pp. 3581–3591.
  • J. Lee, J. Chai, P. S. Reitsma, J. K. Hodgins, and N. S. Pollard (2002) Interactive control of avatars animated with human motion data. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, pp. 491–500.
  • K. Lee, S. Lee, and J. Lee (2018) Interactive character animation by learning multi-objective control. ACM Transactions on Graphics (TOG) 37 (6), pp. 1–10.
  • M. Lee, K. Lee, and J. Park (2013) Music similarity-based approach to generating dance motion sequence. Multimedia Tools and Applications 62 (3), pp. 895–912.
  • Y. Li, T. Wang, and H. Shum (2002) Motion texture: a two-level statistical model for character motion synthesis. In ACM Transactions on Graphics (TOG), Vol. 21, pp. 465–472.
  • Z. Li, Y. Zhou, S. Xiao, C. He, and H. Li (2017) Auto-conditioned LSTM network for extended complex human motion synthesis. arXiv preprint arXiv:1707.05363.
  • J. Martinez, M. J. Black, and J. Romero (2017) On human motion prediction using recurrent neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2891–2900.
  • P. H. Mason (2012) Music, dance and the total art work: choreomusicology in theory and practice. Research in Dance Education 13 (1), pp. 5–24.
  • B. McFee, C. Raffel, D. Liang, D. P. Ellis, M. McVicar, E. Battenberg, and O. Nieto (2015) Librosa: audio and music signal analysis in Python. In Proceedings of the 14th Python in Science Conference, Vol. 8.
  • J. Min and J. Chai (2012) Motion graphs++: a compact generative model for semantic motion analysis and synthesis. ACM Transactions on Graphics (TOG) 31 (6), pp. 153.
  • F. Ofli, E. Erzin, Y. Yemez, and A. M. Tekalp (2011) Learn2dance: learning statistical music-to-dance mappings for choreography synthesis. IEEE Transactions on Multimedia 14 (3), pp. 747–759.
  • X. B. Peng, P. Abbeel, S. Levine, and M. van de Panne (2018) DeepMimic: example-guided deep reinforcement learning of physics-based character skills. ACM Transactions on Graphics (TOG) 37 (4), pp. 143.
  • X. B. Peng, G. Berseth, K. Yin, and M. Van De Panne (2017) DeepLoco: dynamic locomotion skills using hierarchical deep reinforcement learning. ACM Transactions on Graphics (TOG) 36 (4), pp. 41.
  • A. Safonova and J. K. Hodgins (2007) Construction and optimal search of interpolated motion graphs. ACM Transactions on Graphics (TOG) 26 (3), pp. 106.
  • T. Shiratori, A. Nakazawa, and K. Ikeuchi (2006) Dancing-to-music character animation. In Computer Graphics Forum, Vol. 25, pp. 449–458.
  • T. Tang, J. Jia, and H. Mao (2018) Dance with melody: an LSTM-autoencoder approach to music-oriented dance synthesis. In 2018 ACM Multimedia Conference, pp. 1598–1606.
  • T. Tieleman and G. Hinton (2012) Lecture 6.5-RMSProp: divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4 (2), pp. 26–31.
  • X. Wei, J. Min, and J. Chai (2011) Physically valid statistical models for human motion generation. ACM Transactions on Graphics (TOG) 30 (3), pp. 1–10.
  • S. Xia, C. Wang, J. Chai, and J. Hodgins (2015) Realtime style transfer for unlabeled heterogeneous human motion. ACM Transactions on Graphics (TOG) 34 (4), pp. 119.
  • S. Yan, Z. Li, Y. Xiong, H. Yan, and D. Lin (2019) Convolutional sequence generation for skeleton-based action synthesis. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4394–4402.
  • W. Zhuang, C. Wang, S. Xia, J. Chai, and Y. Wang (2020) Music2Dance: music-driven dance generation using WaveNet. arXiv preprint arXiv:2002.03761.